perm filename TIMING.MSG[TIM,LSP]11 blob sn#620906 filedate 1981-10-22 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00127 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00017 00002	∂27-Feb-81  1334	Deutsch at PARC-MAXC 	Re: Timings  
C00019 00003	∂27-Feb-81  1342	Dick Gabriel <RPG at SU-AI> 	Timings    
C00021 00004	∂27-Feb-81  1354	RPG  	Timings  
C00023 00005	∂27-Feb-81  1412	Bruce E. Edwards <BEE at MIT-AI> 	Re: timings
C00025 00006	∂27-Feb-81  1427	Deutsch at PARC-MAXC 	Re: Timings  
C00026 00007	∂27-Feb-81  1502	Deutsch at PARC-MAXC 	Re: Timings  
C00028 00008	∂27-Feb-81  1533	Dick Gabriel <RPG at SU-AI> 	Timings    
C00030 00009	∂27-Feb-81  1616	Earl A. Killian <EAK at MIT-MC> 	Timings     
C00031 00010	∂27-Feb-81  1615	George J. Carrette <GJC at MIT-MC> 	timings  
C00032 00011	∂27-Feb-81  1655	David.Neves at CMU-10A 	Re: Timings
C00033 00012	∂27-Feb-81  1658	David.Neves at CMU-10A 	Re: Timings
C00034 00013	∂27-Feb-81  1710	CSVAX.fateman at Berkeley 	Timings 
C00035 00014	∂27-Feb-81  1719	CSVAX.fateman at Berkeley 	Timings 
C00036 00015	∂27-Feb-81  1730	CSVAX.fateman at Berkeley 	timings 
C00038 00016	∂27-Feb-81  1947	George J. Carrette <GJC at MIT-MC> 	Timings  
C00040 00017	∂27-Feb-81  2002	Howard I. Cannon <HIC at MIT-MC> 	Timings    
C00041 00018	∂27-Feb-81  2008	GYRO at MIT-ML (Scott W. Layson) 	Lisp timings    
C00043 00019	∂27-Feb-81  2048	PDL at MIT-DMS (P. David Lebling) 	[Re: Timings  ]
C00044 00020	∂27-Feb-81  2057	JONL at MIT-MC (Jon L White) 	Timings for LISP benchmarks, and reminder of a proposal by Deutsch    
C00051 00021	∂27-Feb-81  2117	Howard I. Cannon <HIC at MIT-MC> 	Timings for LISP benchmarks    
C00052 00022	∂27-Feb-81  2131	CWH at MIT-MC (Carl W. Hoffman) 	Timings     
C00053 00023	∂27-Feb-81  2201	CSVAX.fateman at Berkeley 	here's a test for you to look at/ distribute    
C00062 00024	∂27-Feb-81  2201	CSVAX.fateman at Berkeley 	Timings for LISP benchmarks, and reminder of a proposal by Deutsch  
C00063 00025	∂28-Feb-81  0916	NEDHUE at MIT-AI (Edmund M. Goodhue) 	Timings     
C00064 00026	∂28-Feb-81  1046	Barry Margolin             <Margolin at MIT-Multics> 	Re: Timings
C00065 00027	∂28-Feb-81  1109	Barry Margolin             <Margolin at MIT-Multics> 	Re: Timings
C00066 00028	∂28-Feb-81  1424	Deutsch at PARC-MAXC 	Re: Timings for LISP benchmarks, and reminder of a proposal by 
C00067 00029	∂28-Feb-81  1718	YONKE at BBND 	JONL's message concerning benchmarks    
C00068 00030	∂28-Feb-81  1818	CSVAX.fateman at Berkeley 	why I excluded GC times
C00070 00031	∂28-Feb-81  2014	Guy.Steele at CMU-10A 	Re: Timings 
C00072 00032	∂28-Feb-81  2016	Scott.Fahlman at CMU-10A 	benchmarks    
C00073 00033	∂01-Mar-81  0826	PLATTS at WHARTON-10 ( Steve Platt) 	timing for lisp   
C00074 00034	∂01-Mar-81  1300	RJF at MIT-MC (Richard J. Fateman) 	more lisp mavens   
C00075 00035	∂02-Mar-81  0443	Robert H. Berman <RHB at MIT-MC> 	Timings    
C00076 00036	∂02-Mar-81  0543	Robert H. Berman <RHB at MIT-MC> 	Timings    
C00077 00037	∂02-Mar-81  0741	James E. O'Dell <JIM at MIT-MC> 	Timings
C00080 00038	∂02-Mar-81  1006	Deutsch at PARC-MAXC 	Re: Timings  
C00081 00039	∂02-Mar-81  1312	Barry Margolin             <Margolin at MIT-Multics> 	Re: Timings
C00082 00040	∂02-Mar-81  1634	RPG  	Lisp Timings  
C00087 00041	∂03-Mar-81  1524	RPG  	Lisp Timing Mailing List
C00089 00042	Here's the first message, which you missed:
C00094 00043	∂04-Mar-81  0449	Robert H. Berman <RHB at MIT-MC> 	Lisp Timing Mailing List  
C00099 00044	∂04-Mar-81  0957	Scott.Fahlman at CMU-10A 	Re: Translators    
C00102 00045	∂04-Mar-81  0959	CSVAX.char at Berkeley 	lisp benchmarking    
C00105 00046	∂04-Mar-81  1627	HEDRICK at RUTGERS 	sometime of possible interest 
C00110 00047	∂06-Mar-81  1301	HES at MIT-AI (Howard Shrobe) 	Methodology considerations:  
C00112 00048	Subject: Lisp Timings Group
C00119 00049	∂10-Mar-81  0727	correira at UTEXAS-11  	lisp timings    
C00121 00050	∂03-Mar-81  2109	Barrow at SRI-KL (Harry Barrow ) 	Lisp Timings    
C00124 00051	∂02-Mar-81  0004	Charles Frankston <CBF at MIT-MC> 	timings   
C00128 00052	∂17-Mar-81  1155	Masinter at PARC-MAXC 	Re: GC 
C00132 00053	∂16-Mar-81  1429	HEDRICK at RUTGERS 	Re: Solicitation    
C00137 00054	∂16-Mar-81  1433	HEDRICK at RUTGERS 	Re: GC    
C00144 00055	∂16-Mar-81  1810	Scott.Fahlman at CMU-10A 	Re: GC   
C00146 00056	∂16-Mar-81  1934	PLATTS at WHARTON-10 ( Steve Platt) 	lisp -- my GC and machine specs  
C00150 00057	∂17-Mar-81  0745	Griss at UTAH-20 (Martin.Griss) 	Re: GC      
C00151 00058	∂17-Mar-81  0837	Robert S. Boyer <BOYER at SRI-CSL> 	Solicitation  
C00155 00059	∂17-Mar-81  0847	Robert S. Boyer <BOYER at SRI-CSL> 	LISP Timings  
C00157 00060	∂17-Mar-81  1155	Masinter at PARC-MAXC 	Re: GC 
C00161 00061	∂17-Mar-81  1218	RPG  	Bureaucracy   
C00162 00062	∂17-Mar-81  1921	Bernard S. Greenberg       <Greenberg at MIT-Multics> 	Re: Solicitation    
C00172 00063	∂31-Mar-81  1451	RPG  	Timing Benchmarks  
C00174 00064	∂01-Apr-81  1550	Masinter at PARC-MAXC    
C00179 00065	∂05-Apr-81  2141	JHL   via LONDON    
C00180 00066	∂05-Apr-81  2217	Carl Hewitt <CARL at MIT-AI> 	Lisp Timing Mailing List 
C00181 00067	∂06-Apr-81  1302	RPG  	Timing benchmark   
C00190 00068	∂06-Apr-81  2007	RPG  
C00192 00069	∂05-Apr-81  0208	H at MIT-AI (Jack Holloway) 	lisp timings    
C00193 00070	∂06-Apr-81  1410	HEDRICK at RUTGERS 	Re: Timing benchmark     
C00196 00071	∂06-Apr-81  1931	Bernard S. Greenberg       <Greenberg at MIT-Multics> 	Re: Timing benchmark
C00199 00072	∂06-Apr-81  2008	HEDRICK at RUTGERS 	Re: Timing benchmark     
C00205 00073	∂06-Apr-81  2007	RPG  
C00207 00074	∂07-Apr-81  0924	RPG  	Rules    
C00211 00075	∂07-Apr-81  1323	JONL at MIT-MC (Jon L White) 	Proposed ''mini'' benchmark, with interpretation. 
C00224 00076	∂10-Apr-81  1051	HEDRICK at RUTGERS 	Re: Rules      
C00232 00077	∂10-Apr-81  1205	George J. Carrette <GJC at MIT-MC> 	Rules    
C00234 00078	∂11-Apr-81  1001	CSVAX.jkf at Berkeley 	result of pairs benchmark on franz.  
C00239 00079	∂13-Apr-81  1320	RPG  
C00241 00080	∂13-Apr-81  1239	RPG  	Groundrules (reprise)   
C00246 00081	∂13-Apr-81  1338	CLR at MIT-XX 	Re: Groundrules (reprise)     
C00249 00082	∂13-Apr-81  1724	YONKE at BBND 	Re: Groundrules (reprise)     
C00251 00083	∂13-Apr-81  1934	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	Re: Groundrules (reprise)       
C00256 00084	∂13-Apr-81  2214	HEDRICK at RUTGERS 	Re: Groundrules (reprise)     
C00262 00085	∂21-Apr-81  1316	RPG  	SCCPP    
C00263 00086	∂13-Mar-81  1959	MEEHAN at MIT-AI (James R. Meehan) 
C00265 00087	∂31-Mar-81  1615	Deutsch at PARC-MAXC 	Re: Timing Benchmarks  
C00266 00088	∂21-Apr-81  1604	Greenberg.Symbolics at MIT-Multics 
C00268 00089	∂07-Apr-81  1037	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	Re: Rules        
C00270 00090	∂07-Apr-81  1107	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	Rules - GC time  
C00274 00091	∂07-Apr-81  2213	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	SCCPP on UCI-Lisp
C00277 00092	∂21-Apr-81  2018	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	Lost mail? 
C00289 00093	∂06-Apr-81  1204	RPG  
C00295 00094	∂14-Apr-81  2031	RPG  
C00321 00095	∂22-Apr-81  1801	Bernard S. Greenberg       <Greenberg at MIT-Multics> 	Multics Timing results vindicated  
C00323 00096	∂23-Apr-81  1232	RPG  	FRANZ Benchmark    (FRPOLY)
C00337 00097	∂23-Apr-81  1245	RPG  	Franz benchmark    
C00338 00098	∂24-Apr-81  1324	Bernard S. Greenberg       <Greenberg at MIT-Multics> 	Re: FRANZ Benchmark, Multics Numbers    
C00341 00099	∂24-Apr-81  1414	RPG  	Errata   
C00342 00100	∂24-Apr-81  1608	CSVAX.jkf at Berkeley 	octal vrs decimal
C00343 00101	∂25-Apr-81  1242	Greenberg.Symbolics at MIT-Multics 	Re: octal vrs decimal   
C00344 00102	∂25-Apr-81  1320	Vanmelle at SUMEX-AIM 	Re:  Re: octal vrs decimal 
C00345 00103	∂25-Apr-81  1727	Greenberg.Symbolics at MIT-Multics 	Re:  Re: octal vrs decimal   
C00346 00104	∂25-Apr-81  2210	CSVAX.jkf at Berkeley 	Re:  Re: octal vrs decimal 
C00348 00105	∂24-Apr-81  1411	CSVAX.jkf at Berkeley 	franz timing results  
C00350 00106	∂28-Apr-81  1122	Vanmelle at SUMEX-AIM 	Re: Benchmarks        
C00352 00107	∂28-Apr-81  2115	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	Re: FRANZ Benchmark        
C00356 00108	∂02-May-81  1245	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	Re: FRANZ Benchmark        
C00362 00109	∂04-May-81  1326	correira at UTEXAS-11  	UTLISP benchmarks    
C00364 00110	∂05-May-81  0643	correira at UTEXAS-11  	SCCPP Timings for UTLISP  
C00366 00111	∂05-May-81  0643	correira at UTEXAS-11  	FRPOLY Timings for UTLISP 
C00372 00112	∂05-May-81  0643	correira at UTEXAS-11  	A thumbnail sketch of UTLISP   
C00381 00113	∂26-May-81  0916	George J. Carrette <GJC at MIT-MC> 	benchmark.    
C00398 00114	∂09-Aug-81  1912	RPG   via CMU-20C 	Vacation   
C00399 00115	∂20-Oct-81  1527	LYNCH at USC-ISIB 	Benchmarks for Interlisp-VAX   
C00403 00116	∂20-Oct-81  1614	Doug Lenat <CSD.LENAT at SU-SCORE> 	Save the Dolphins  
C00412 00117	∂20-Oct-81  1744	pratt@Diablo (SuNet) 	Benchmarks for Interlisp-VAX
C00423 00118	∂21-Oct-81  0109	RPG  	Criterion 1   
C00427 00119	∂17-Oct-81  2340	pratt@Diablo (SuNet) 	Fairness
C00437 00120	∂18-Oct-81  2141	pratt@Diablo (SuNet) 	For what it's worth    
C00442 00121	∂18-Oct-81  2254	RPG@Sail (SuNet) 	Several points:       
C00446 00122	∂19-Oct-81  0935	RINDFLEISCH@SUMEX-AIM (SuNet) 	FYI - Other Lisp Timing Thrashes  
C00454 00123	∂19-Oct-81  1045	pratt@Diablo (SuNet) 	Several points:   
C00458 00124	∂19-Oct-81  1143	RPG@Sail (SuNet) 	Long, silly response to Vaughn Pratt      
C00461 00125	∂19-Oct-81  1545	Jeff Rubin <JBR at S1-A> 
C00463 00126	∂21-Oct-81  1325	RPG  	Wall time
C00470 00127	∂22-Oct-81  2009	George J. Carrette <GJC at MIT-MC> 	timing tests and benchmarks. 
C00472 ENDMK
C⊗;
∂27-Feb-81  1334	Deutsch at PARC-MAXC 	Re: Timings  
Date: 27 Feb 1981 13:32 PST
From: Deutsch at PARC-MAXC
Subject: Re: Timings
In-reply-to: RPG's message of 27 Feb 1981 1319-PST
To: Dick Gabriel <RPG at SU-AI>
cc: info-lispm at MIT-AI

Your suggestion sounds great.  What we need is someone to organize the process
just a little.  Such a person would do something like the following:

1) Collect the names of volunteers or contact persons at each site, to send sample
programs to.
2) Collect the sample programs from each site, and disseminate them to the
volunteers or contacts at the other sites.
3) Collect the translated sample programs (in case there was controversy over
whether the translation was "trivial", for example, and for documentation and
historical purposes).
4) Collect the results of the timings run at each site, and disseminate them.

Would you like to volunteer?

∂27-Feb-81  1342	Dick Gabriel <RPG at SU-AI> 	Timings    
Date: 27 Feb 1981 1319-PST
From: Dick Gabriel <RPG at SU-AI>
Subject: Timings  
To:   deutsch at PARC-MAXC
CC:   info-lispm at MIT-AI    

	Since everyone I know of is trying to make a decision about what
to do about Lisp computing in the next five years, perhaps we should
try to co-ordinate a test that will help everyone make a decision.
For instance, each center (PARC, MIT, Stanford, CMU, Berkeley,...)
can provide a program that is of interest to them (not too big, of course);
each test site will then provide someone to re-code (in a very trivial sense:
turning greaterp into >, adding declarations) those programs into reasonably
efficient code for their system. The authors will provide timing data and
timing points in their code.

	Each center may have a few programs since they may have diverse
communities (SAIL and HPP at Stanford). I would be happy to volunteer to
test programs for SAIL MacLisp, which is a 10 version.
			-rpg-



∂27-Feb-81  1354	RPG  	Timings  
To:   deutsch at PARC-MAXC
CC:   RPG at SU-AI, info-lispm at MIT-AI
I will volunteer to co-ordinate the Lisp timing test. I plan to contact:

	Deutsch/Masinter at Parc (InterLisp on MAXC, Dorado, Dolphin...)
	RPG/ROD at SAIL (MacLisp on SAIL, TOPS-20, FOONLY F2)
	VanMelle@SUMEX (InterLisp on TOPS-20)
	Fateman at Berkeley (FranzLisp on VAX)
	Hedrick at Rutgers (UCILISP on TOPS-10?)
	Fahlman/Steele at CMU (SPICELISP on ?, MacLisp on CMU-10)
	HIC at MIT (Lisp Machine)
	JONL at MIT (MacLisp on ITS, NIL on VAX)
	Westfold at SCI (InterLisp on F2)
	Weyhrauch at SAIL (Ilisp on SAIL, LISP1.6 on SAIL)

If anyone has any suggestions about who else to contact or other Lisps
and/or machines to try, let me know soon.

				-rpg-

∂27-Feb-81  1412	Bruce E. Edwards <BEE at MIT-AI> 	Re: timings
Date: 27 February 1981 16:32-EST
From: Bruce E. Edwards <BEE at MIT-AI>
Subject: Re: timings
To: CPR at MIT-EECS
cc: INFO-LISPM at MIT-AI, RWS at MIT-XX

As Peter Deutsch has pointed out, this is a crummy benchmark, which was implemented
by relatively unenlightened programming on the CADR. I made it almost 50% faster
in 5 minutes, and the new numbers are much better. They could be improved further,
but basically people aren't interested in hacking uninteresting benchmarks. Things
like a natural language parser or an AI program are more what we are interested in.
There are some data points along this line, but I can't remember the exact numbers.
Hopefully RG has the numbers for the WOODS lunar program tucked away somewhere.

∂27-Feb-81  1427	Deutsch at PARC-MAXC 	Re: Timings  
Date: 27 Feb 1981 14:26 PST
From: Deutsch at PARC-MAXC
Subject: Re: Timings
In-reply-to: RPG's message of 27 Feb 1981 1354-PST
To: Dick Gabriel <RPG at SU-AI>

Great!  Perhaps we will finally throw some light into the murk of claims and
counter-claims about Lisp speeds that have been made for many years.

You might consider sending out some kind of announcement to LISP-FORUM
and/or LISP-DISCUSSION at MIT-AI as well -- I'm not sure everyone of interest
is on INFO-LISPM.



∂27-Feb-81  1616	Earl A. Killian <EAK at MIT-MC> 	Timings     
Date: 27 February 1981 19:16-EST
From: Earl A. Killian <EAK at MIT-MC>
Subject:  Timings  
To: RPG at SU-AI

I've got a queuing simulation program in MC:EAK;SIMUL > that,
while it isn't at all typical of AI, uses an interesting mix of
list and numeric computation, and also runs for a fair time without
being overly long.  I'm not sure whether it's useful to you,
but if it is, let me know.

∂27-Feb-81  1615	George J. Carrette <GJC at MIT-MC> 	timings  
Date: 27 February 1981 17:35-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  timings
To: Deutsch at PARC-MAXC
cc: INFO-LISPM at MIT-MC, masinter at PARC-MAXC, guttag at MIT-XX,
    RWS at MIT-XX

How about using Macsyma? It has some interesting programs in it,
and it has given the Lispmachine quite a work-out on some large
real problems (or did the Lispmachine give macsyma a work out?).

-gjc


∂27-Feb-81  1655	David.Neves at CMU-10A 	Re: Timings
Date: 27 February 1981 1954-EST (Friday)
From: David.Neves at CMU-10A
To: Dick Gabriel <RPG at SU-AI> 
Subject:  Re: Timings
In-Reply-To:  Dick Gabriel's message of 27 Feb 81 16:54-EST
Message-Id: <27Feb81 195427 DN10@CMU-10A>

why not also try TLC lisp on a micro.  ask jra@sail
also BBN's jerico might be relevant but i don't think they
	have a lisp for it.

∂27-Feb-81  1658	David.Neves at CMU-10A 	Re: Timings
Date: 27 February 1981 1957-EST (Friday)
From: David.Neves at CMU-10A
To: Dick Gabriel <RPG at SU-AI> 
Subject:  Re: Timings
In-Reply-To:  Dick Gabriel's message of 27 Feb 81 16:54-EST
Message-Id: <27Feb81 195751 DN10@CMU-10A>

p.s.  also i believe people at BBN are trying to put Interlisp on
 a Prime computer.  If they do have a version up, that would be
 another data point.  i don't know who you would contact though.

∂27-Feb-81  1710	CSVAX.fateman at Berkeley 	Timings 
Date: 27 Feb 1981 16:20:26-PST
From: CSVAX.fateman at Berkeley
To: RPG@SU-AI, deutsch@PARC-MAXC
Subject: Timings
Cc: info-lispm@mit-ai

add Griss@utah-20 (standard lisp on 10, b-1700, ...)

∂27-Feb-81  1730	CSVAX.fateman at Berkeley 	timings 
Date: 27 Feb 1981 16:43:27-PST
From: CSVAX.fateman at Berkeley
To: Deutsch@PARC-MAXC, GJC@MIT-MC
Subject: timings
Cc: INFO-LISPM@MIT-MC, RWS@MIT-XX, CSVAX.fateman@Berkeley, guttag@MIT-XX, masinter@PARC-MAXC

George: are you offering to put Macsyma up in Interlisp?  We already
have some LM /KL-10/ VAX-11/780 benchmarks (KL-10 maclisp):
Vaxima and Lisp Machine timings for DEMO files
(fg genral, fg rats, gen demo, begin demo)
(garbage collection times excluded.)  Times in seconds.

MC        VAXIMA     128K lm     192K lm    256K lm VAXIMA Jul 80
4.119	   17.250   43.333      19.183     16.483    15.750
2.639	    7.016   55.916      16.416     13.950  
3.141	   10.850  231.516      94.933     58.166   
4.251	   16.700  306.350     125.666     90.716    12.400

(Berkeley CS.VAX 11/780 UNIX April 8, 1980,  KL-10 MIT-MC ITS April 9, 1980.)

improvements due to expanding alike1 and a few odds and ends as macros;
also some improvements in the compiler.


∂27-Feb-81  1947	George J. Carrette <GJC at MIT-MC> 	Timings  
Date: 27 February 1981 22:47-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  Timings  
To: RPG at SU-AI
cc: deutsch at PARC-MAXC

I have a useful benchmark which I just tried in Maclisp at MIT-MC
and on a LISPM. It is code which does line-drawing window-clipping 
for arbitrary convex polygonal regions. This code is in actual use.
If you want to see it, it is on MIT-MC in
[MC:BLIS11;CLIP >]. (yes, I hack BLISS. (wow what a compiler!))
It is a nice example because it tests the speed of the FUNCALL dispatch.
The file is conditionalized to run in either LISPM or Maclisp, and
even includes the timing methods used. I would very much like to
run the same (*exactly*) conditionalized source on
N different systems; that way I would have
(1) greater confidence, and
(2) an exact knowledge of how things are done differently on the
    different systems, e.g. how much hair one has to go through to
    declare things to the compiler.

-gjc

∂27-Feb-81  2002	Howard I. Cannon <HIC at MIT-MC> 	Timings    
Date: 27 February 1981 23:02-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  Timings  
To: RPG at SU-AI

I'll be happy to do the timing tests.
--Howard

∂27-Feb-81  2008	GYRO at MIT-ML (Scott W. Layson) 	Lisp timings    
Date: 27 FEB 1981 2306-EST
From: GYRO at MIT-ML (Scott W. Layson)
Subject: Lisp timings
To: rpg at SU-AI
CC: GYRO at MIT-ML, INFO- at MIT-ML, INFO-LISPM at MIT-ML

I know this is a little silly, but if you have any REALLY tiny
benchmarks (space-wise) I would like to try them out in TLC-Lisp
and muLisp, both running on a 64K Z-80.  These Lisps don't page,
so the program and data have to fit in small real memory.
(Perhaps I should call them "Lisplets"?)

Incidentally, it seems to me that GC time should be included in
the times reported.  Different systems generate garbage at
different rates and deal with it at different efficiencies,
and this shows up in the user-response time of the systems
(which is, after all, what we really want to know).

-- Scott Layson
---------------

∂27-Feb-81  2048	PDL at MIT-DMS (P. David Lebling) 	[Re: Timings  ]
Date: 27 Feb 1981 2348-EST
From: PDL at MIT-DMS (P. David Lebling)
To: rpg at SU-AI
In-reply-to: Message of 27 Feb 81 at 1354 PST by RPG@SU-AI
Subject: [Re: Timings  ]
Message-id: <[MIT-DMS].187847>

You should contact either CLR@MIT-XX or myself for Muddle.
	Dave


∂27-Feb-81  2057	JONL at MIT-MC (Jon L White) 	Timings for LISP benchmarks, and reminder of a proposal by Deutsch    
Date: 27 FEB 1981 2352-EST
From: JONL at MIT-MC (Jon L White)
Subject: Timings for LISP benchmarks, and reminder of a proposal by Deutsch
To: rpg at SU-AI
CC: LISP-DISCUSSION at MIT-MC, BEE at MIT-AI, JHL at MIT-AI
CC: CSVAX.fateman at BERKELEY, RWS at MIT-XX

I notice you sent your proposal to INFO-LISPM, and thought that the
LISP-DISCUSSION community might want to be aware of it too.  (Deutsch and
Masinter are, I believe, on this list, as is Griss).
    Date: 27 Feb 1981 1354-PST
    From: Dick Gabriel <RPG at SU-AI>
    I will volunteer to co-ordinate the Lisp timing test. I plan to contact:
	    Deutsch/Masinter at Parc (InterLisp on MAXC, Dorado, Dolphin...)
	    RPG/ROD at SAIL (MacLisp on SAIL, TOPS-20, FOONLY F2)
	    VanMelle@SUMEX (InterLisp on TOPS-20)
	    Fateman at Berkeley (FranzLisp on VAX)
	    Hedrick at Rutgers (UCILISP on TOPS-10?)
	    Fahlman/Steele at CMU (SPICELISP on ?, MacLisp on CMU-10)
	    HIC at MIT (Lisp Machine)
	    JONL at MIT (MacLisp on ITS, NIL on VAX)
	    Westfold at SCI (InterLisp on F2)
	    Weyhrauch at SAIL (Ilisp on SAIL, LISP1.6 on SAIL)
    If anyone has any suggestions about who else to contact or other Lisps
    and/or machines to try, let me know soon.
The contact for Rutgers-LISP should probably be JOSH@RUTGERS-10
(John Storrs Hall), who is actively extending what was formerly called UCILISP.
Fateman's login name is   CSVAX.fateman@Berkeley   unless there is some 
smarts to his mailer that I don't know about.
Also, I'd like to suggest the following additions
  GRISS@UTAH-20  for "STANDARD-LISP" on PDP10, IBM370, etc
  John Allen (who knows where?) for his "Cromemco" lisp on Z80 etc
  JHL@MIT-AI (Joachim Laubsch, from Stuttgart, West Germany) who might be 
             able to involve the European LISP community.

    I'll also send a note about these actions to Shigeki Goto of the Nippon 
Telephone Co. in Tokyo, who generated some sort of flurry last fall with his 
incredibly simple "benchmark" function TAK.  Actually, TAK may be useful as 
one part of a multi-foliate benchmark, since it specifically tests timings 
for (1) the function-to-function interface, and (2) simple arithmetic with 
GREATERP and SUB1.  Some of Baskett's benchmarks score heavily on the array
capabilities, for which FORTRAN compilers "come off smelling like a rose",
and even the fast arithmetic of MacLISP savors of a garbage dump.
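[For readers who have not seen it: TAK is usually stated roughly as below.
This is a reconstruction in MacLisp style from the description above, not
Goto's exact source; note the decimal points, since MacLisp reads octal
by default.]

;; Goto's TAK benchmark, reconstructed from its usual statement.
;; It measures almost nothing but function-call overhead plus
;; GREATERP and SUB1, which is exactly the point made above.
(defun tak (x y z)
  (cond ((not (greaterp x y)) z)
        (t (tak (tak (sub1 x) y z)
                (tak (sub1 y) z x)
                (tak (sub1 z) x y)))))

;; a typical timing call: (tak 18. 12. 6.)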

   At the little "lisp discussion" held in Salt Lake City, December 1980,
(attendees partly co-incident with LISP-DISCUSSION mailing list), Peter 
Deutsch made a suggestion which we all liked, but for which there
has been no subsequent action (to my knowledge).  Basically, in order to
educate ourselves about the consequences of the differences between LISP
dialects, and to get some experience in converting "real" code, each
dialect community should nominate a representative piece of "useful code" 
from its environment, and the groups responsible for the other
dialects would try to "transport" it into their own.  Several benefits
should accrue:
  (1) If the "representative" is some useful piece of the general environment, 
      say like the DEFMACRO "cache'ing" scheme of MacLISP/NIL, or the
      Interlisp Masterscope, or whatever, then the "transportation" cost 
      will be repaid by having a useful new tool in the other dialects.
  (2) We should accumulate a library of automatic conversion tools, or
      at least of written reports on the problems involved.
  (3) Each community may be affected in a way which (hopefully) will help 
      reduce the hard-core interdialect incompatibilities.
(Apologies to Deutsch for any garbling of the proposal content).

∂27-Feb-81  2117	Howard I. Cannon <HIC at MIT-MC> 	Timings for LISP benchmarks    
Date: 28 February 1981 00:17-EST
From: Howard I. Cannon <HIC at MIT-MC>
Subject:  Timings for LISP benchmarks
To: rpg at SU-AI, deutsch at PARC-MAXC
cc: Greenberg.Symbolics at MIT-MULTICS

I suggest Greenberg.Symbolics@MIT-MULTICS for Multics MacLisp.

∂27-Feb-81  2131	CWH at MIT-MC (Carl W. Hoffman) 	Timings     
Date: 28 FEB 1981 0030-EST
From: CWH at MIT-MC (Carl W. Hoffman)
Subject: Timings  
To: RPG at SU-AI

    Date: 27 Feb 1981 1354-PST
    From: Dick Gabriel <RPG at SU-AI>

    If anyone has any suggestions about who else to contact or other Lisps
    and/or machines to try, let me know soon.

    				-rpg-

You might also contact Richard Lamson or Bernie Greenberg for timings of
MacLisp on various Multics sites.  Net addresses are "Lamson at MIT-Multics"
and "Greenberg at MIT-Multics".

∂27-Feb-81  2201	CSVAX.fateman at Berkeley 	here's a test for you to look at/ distribute    
Date: 27 Feb 1981 21:26:56-PST
From: CSVAX.fateman at Berkeley
To: rpg@su-ai
Subject: here's a test for you to look at/ distribute


;; test from Berkeley based on polynomial arithmetic.

(declare (special ans coef f inc i k qq ss v *x*
		    *alpha *a* *b* *chk *l *p q* u* *var *y*))
(declare (localf pcoefadd pcplus pcplus1 pplus ptimes ptimes1
		 ptimes2 ptimes3 psimp pctimes pctimes1
		 pplus1))
;; Franz uses maclisp hackery here; you can rewrite lots of ways.
(defmacro pointergp (x y) `(> (get ,x 'order)(get ,y 'order)))

(defmacro pcoefp (e) `(atom ,e))
(defmacro pzerop (x) `(signp e ,x))			;true for 0 or 0.0
(defmacro pzero () 0)
(defmacro cplus (x y) `(plus ,x ,y))
(defmacro ctimes (x y) `(times ,x ,y))


(defun pcoefadd (e c x) (cond ((pzerop c) x)
			      (t (cons e (cons c x)))))

(defun pcplus (c p) (cond ((pcoefp p) (cplus p c))
			  (t (psimp (car p) (pcplus1 c (cdr p))))))

(defun pcplus1 (c x)
       (cond ((null x)
	      (cond ((pzerop c) nil) (t (cons 0 (cons c nil)))))
	     ((pzerop (car x)) (pcoefadd 0 (pplus c (cadr x)) nil))
	     (t (cons (car x) (cons (cadr x) (pcplus1 c (cddr x)))))))
	 
(defun pctimes (c p) (cond ((pcoefp p) (ctimes c p))
			   (t (psimp (car p) (pctimes1 c (cdr p))))))

(defun pctimes1 (c x)
       (cond ((null x) nil)
	     (t (pcoefadd (car x)
			  (ptimes c (cadr x))
			  (pctimes1 c (cddr x))))))

(defun pplus (x y) (cond ((pcoefp x) (pcplus x y))
			 ((pcoefp y) (pcplus y x))
			 ((eq (car x) (car y))
			  (psimp (car x) (pplus1 (cdr y) (cdr x))))
			 ((pointergp (car x) (car y))
			  (psimp (car x) (pcplus1 y (cdr x))))
			 (t (psimp (car y) (pcplus1 x (cdr y))))))

(defun pplus1 (x y)
       (cond ((null x) y)
	     ((null y) x)
	     ((= (car x) (car y))
	      (pcoefadd (car x)
			(pplus (cadr x) (cadr y))
			(pplus1 (cddr x) (cddr y))))
	     ((> (car x) (car y))
	      (cons (car x) (cons (cadr x) (pplus1 (cddr x) y))))
	     (t (cons (car y) (cons (cadr y) (pplus1 x (cddr y)))))))

(defun psimp (var x)
       (cond ((null x) 0)
	     ((atom x) x)
	     ((zerop (car x)) (cadr x))
	      (t (cons var x))))

(defun ptimes (x y) (cond ((or (pzerop x) (pzerop y)) (pzero))
			  ((pcoefp x) (pctimes x y))
			  ((pcoefp y) (pctimes y x))
			  ((eq (car x) (car y))
			   (psimp (car x) (ptimes1 (cdr x) (cdr y))))
			  ((pointergp (car x) (car y))
			   (psimp (car x) (pctimes1 y (cdr x))))
			  (t (psimp (car y) (pctimes1 x (cdr y))))))

(defun ptimes1 (*x* y) (prog (u* v)
			       (setq v (setq u* (ptimes2 y)))
			  a    (setq *x* (cddr *x*))
			       (cond ((null *x*) (return u*)))
			       (ptimes3 y)
			       (go a)))

(defun ptimes2 (y) (cond ((null y) nil)
			 (t (pcoefadd (plus (car *x*) (car y))
				      (ptimes (cadr *x*) (cadr y))
				      (ptimes2 (cddr y))))))

(defun ptimes3 (y) 
  (prog (e u c) 
     a1 (cond ((null y) (return nil)))
	(setq e (+ (car *x*) (car y)))
	(setq c (ptimes (cadr y) (cadr *x*) ))
	(cond ((pzerop c) (setq y (cddr y)) (go a1))
	      ((or (null v) (> e (car v)))
	       (setq u* (setq v (pplus1 u* (list e c))))
	       (setq y (cddr y)) (go a1))
	      ((= e (car v))
	       (setq c (pplus c (cadr v)))
	       ;; pdiffer1 is not defined in this file; for the inputs used
	       ;; by these benchmarks this branch is never taken.
	       (cond ((pzerop c) (setq u* (setq v (pdiffer1 u* (list (car v) (cadr v))))))
		     (t (rplaca (cdr v) c)))
	       (setq y (cddr y))
	       (go a1)))
     a  (cond ((and (cddr v) (> (caddr v) e)) (setq v (cddr v)) (go a)))
	(setq u (cdr v))
     b  (cond ((or (null (cdr u)) (< (cadr u) e))
	       (rplacd u (cons e (cons c (cdr u)))) (go e)))
	(cond ((pzerop (setq c (pplus (caddr u) c))) (rplacd u (cdddr u)) (go d))
	      (t (rplaca (cddr u) c)))
     e  (setq u (cddr u))
     d  (setq y (cddr y))
	(cond ((null y) (return nil)))
	(setq e (+ (car *x*) (car y)))
	(setq c (ptimes (cadr y) (cadr *x*)))
     c  (cond ((and (cdr u) (> (cadr u) e)) (setq u (cddr u)) (go c)))
	(go b))) 





















; raise polynomial p to the n-th power by repeated squaring
(defun pexptsq (p n)
	(do ((n (quotient n 2) (quotient n 2))
	     (s (cond ((oddp n) p) (t 1))))
	    ((zerop n) s)
	    (setq p (ptimes p p))
	    (and (oddp n) (setq s (ptimes s p))) ))



(defun setup nil
  (putprop 'x 1 'order)
  (putprop 'y 2 'order)
  (putprop 'z 3 'order)
  (setq r (pplus '(x 1 1 0 1) (pplus '(y 1 1) '(z 1 1)))) ; r= x+y+z+1
  (setq r2 (ptimes r 100000)) ;r2 = 100000*r
  (setq r3 (ptimes r 1.0)); r3 = r with floating point coefficients
  )
; time various computations of powers of polynomials, not counting
; printing but including gc time; provide an account of g.c. time.

; The following function uses (ptime) for process-time and is thus
;  Franz-specific.

(defun bench (n)
  (setq start (ptime)) ;  Franz ticks, 60 per sec, 2nd number is GC
  (pexptsq r n) 
  (setq res1 (ptime))
  (pexptsq r2 n)
  (setq res2 (ptime))
  ; this one requires bignums.
  (pexptsq r3 n)
  (setq res3 (ptime))
  (list 'power=  n (b1 start res1)(b1 res1 res2)(b1 res2 res3)))
(defun b1(x y)(mapcar '(lambda(r s)(quotient (- s r) 60.0)) x y))

;instructions:
;  after loading, type (setup)
; then (bench 2) ; this should be pretty fast.
; then (bench 5)
; then (bench 10)
; then (bench 15)
;... 

∂27-Feb-81  2201	CSVAX.fateman at Berkeley 	Timings for LISP benchmarks, and reminder of a proposal by Deutsch  
Date: 27 Feb 1981 21:32:33-PST
From: CSVAX.fateman at Berkeley
To: JONL@MIT-MC, rpg@SU-AI
Subject: Timings for LISP benchmarks, and reminder of a proposal by Deutsch
Cc: BEE@MIT-AI, JHL@MIT-AI, LISP-DISCUSSION@MIT-MC

I have sent an entry (polynomial arithmetic system) to rpg@su-ai.
He can examine and redistribute.
  ( fateman@berkeley is equivalent to csvax.fateman@berkeley...)

∂28-Feb-81  0916	NEDHUE at MIT-AI (Edmund M. Goodhue) 	Timings     
Date: 28 FEB 1981 1215-EST
From: NEDHUE at MIT-AI (Edmund M. Goodhue)
Subject: Timings  
To: RPG at SU-AI

I suggest you add Jim Meehan at UCI (maintainer of UCI LISP), who can
run benchmarks on UCILISP and MLISP on both TOPS-10 and TOPS-20.  UCI
is not on the net, but he can be reached via MEEHAN@MIT-AI.

Ned Goodhue

∂28-Feb-81  1046	Barry Margolin             <Margolin at MIT-Multics> 	Re: Timings
Date:     28 February 1981 1343-est
From:     Barry Margolin             <Margolin at MIT-Multics>
Subject:  Re: Timings
To:       RPG at SU-AI
Cc:       info-lispm at MIT-AI

I think you should also contact someone at MIT-Multics, where they run
MacLisp, although I'm not sure who you should contact.

∂28-Feb-81  1424	Deutsch at PARC-MAXC 	Re: Timings for LISP benchmarks, and reminder of a proposal by 
Date: 28 Feb 1981 14:23 PST
From: Deutsch at PARC-MAXC
Subject: Re: Timings for LISP benchmarks, and reminder of a proposal by
 Deutsch
In-reply-to: JONL's message of 27 FEB 1981 2352-EST
To: rpg at SU-AI, LISP-DISCUSSION at MIT-MC, BEE at MIT-AI, JHL at MIT-AI,
 CSVAX.fateman at BERKELEY, RWS at MIT-XX

JONL accurately represented the content of my proposal.  The set of programs
being submitted for timing tests might indeed be a useful place to start.

∂28-Feb-81  1718	YONKE at BBND 	JONL's message concerning benchmarks    
Date: 28 Feb 1981 2009-EST
Sender: YONKE at BBND
Subject: JONL's message concerning benchmarks
From: YONKE at BBND
To: RPG at SU-AI, Lisp-Discussion at MIT-MC
Message-ID: <[BBND]28-Feb-81 20:09:20.YONKE>

I'd like to add Interlisp on Jericho (our in-house machine).
Also, since BBN has several different flavors of DEC hardware
which run TOPS-20, I wouldn't mind supplying these different
timings; they would probably be more informative than Kurt's
(VanMelle) from SUMEX.

Martin

∂28-Feb-81  1818	CSVAX.fateman at Berkeley 	why I excluded GC times
Date: 28 Feb 1981 17:15:23-PST
From: CSVAX.fateman at Berkeley
To: HES@MIT-AI
Subject: why I excluded GC times
Cc: CSVAX.fateman@Berkeley, info-lispm@mit-mc, lisp-discussion@mit-mc

including GC times makes for a very messy statistical situation.
GC time (or even if it happens at all) is dependent on the virtual
address space in use at the time, how much of the macsyma system
has been loaded (in the case of the KL-10), etc.  I do not know
about the LM figures, since I am only reporting stuff sent to me,
but the KL-10 and the VAX typically spend 30% additional time in
GC, averaged over various "production" runs.  Trading off GC time
for system paging time is a funny business, though I agree it
is important.


∂28-Feb-81  2014	Guy.Steele at CMU-10A 	Re: Timings 
Date: 28 February 1981 2313-EST (Saturday)
From: Guy.Steele at CMU-10A
To: Dick Gabriel <RPG at SU-AI> 
Subject:  Re: Timings
In-Reply-To:  Dick Gabriel's message of 27 Feb 81 16:54-EST
Message-Id: <28Feb81 231341 GS70@CMU-10A>

You may want to get in touch with the people at Utah (Standard LISP)
for various machines, and maybe John Allen (who has implementations
for micros, for low end of curve).

Also let me note that you are likely to get a great CACM article or
something out of distilling all this stuff if you want; more power
to you.  I'll coordinate running tests on Spice Lisp, though that
may take some time to materialize.
--Q

∂28-Feb-81  2016	Scott.Fahlman at CMU-10A 	benchmarks    
Date: 28 February 1981 2315-EST (Saturday)
From: Scott.Fahlman at CMU-10A
To: rpg at su-ai
Subject:  benchmarks
Message-Id: <28Feb81 231549 SF50@CMU-10A>


Hi,
I just added my name to Lisp discussion recently and seem to have missed
something.  Exactly what benchmarks are you running/getting people to
run?  If there was a message that kicked all of this off, I would be
interested in seeing it.

We will be happy to add Spice Lisp on Perq benchmarks when the time comes,
but we won't be ready till summer.
-- Scott

∂01-Mar-81  0826	PLATTS at WHARTON-10 ( Steve Platt) 	timing for lisp   
Date:  1 Mar 1981 (Sunday) 1124-EDT
From: PLATTS at WHARTON-10 ( Steve Platt)
Subject: timing for lisp
To:   rpg at SU-AI

  ...if the systems are not *too* big, I'd like to try them on my micro
(Z80) lisp....  rough limits -- stack is a few hundred calls deep (I can
relink to change this if necessary), cell space is limited to roughly
10K cells.  Most basic major lisp functions (a la maclisp, for the most
part) are implemented, others can be added.
   -Steve

∂01-Mar-81  1300	RJF at MIT-MC (Richard J. Fateman) 	more lisp mavens   
Date:  1 MAR 1981 1600-EST
From: RJF at MIT-MC (Richard J. Fateman)
Subject: more lisp mavens
To: rpg at SU-AI

Try boyer@sri-kl.  They have an F2, and Boyer undoubtedly
could supply a theorem-prover benchmark.

∂02-Mar-81  0443	Robert H. Berman <RHB at MIT-MC> 	Timings    
Date: 2 March 1981 07:43-EST
From: Robert H. Berman <RHB at MIT-MC>
Subject:  Timings  
To: RPG at SU-AI
cc: deutsch at PARC-MAXC

Please add me to your timing test survey. I have several
suggestions of features that I would like to know about.

Thanks.

--Bob

∂02-Mar-81  0741	James E. O'Dell <JIM at MIT-MC> 	Timings
Date: 2 March 1981 10:40-EST
From: James E. O'Dell <JIM at MIT-MC>
Subject:  Timings
To: Margolin at MIT-MULTICS
cc: RPG at SU-AI

    Date: 28 February 1981 1343-est
    From: Barry Margolin <Margolin at MIT-Multics>
    To:   RPG at SU-AI
    cc:   info-lispm at MIT-AI
    Re:   Timings

    I think you should also contact someone at MIT-Multics, where they run
    MacLisp, although I'm not sure who you should contact.

If the timings don't take too long to work up I'd be glad to run the
Multics Lisp trials. As you might know we have a Macsyma running there
now, version 293. It typically runs at .6 of a MC. The tricky thing is that
on some BIG problems it runs as fast or faster than MC because of its
larger address space. It spends less of its time collecting garbage than
on MC. I feel that this is an important factor.

At least one of the timings should CONS up a storm. We have had problems
with address space on both the LISPM and on 10's. Some large Macsyma
problems use up all of the address space on the LISPM because we don't run
with the garbage collector. GC'ing on the LISPM slows things down a lot.

I also think that the LISPM is being unfairly compared because of its
single user nature. The numbers do not accurately reflect the responsiveness
observed by the user.


∂02-Mar-81  1006	Deutsch at PARC-MAXC 	Re: Timings  
Date: 2 Mar 1981 10:06 PST
From: Deutsch at PARC-MAXC
Subject: Re: Timings
In-reply-to: RPG's message of 27 Feb 1981 1354-PST
To: Dick Gabriel <RPG at SU-AI>
cc: Masinter

Please take me off the list of people doing Lisp timings.  Larry Masinter or
someone else at PARC who is actively working on Lisp (which I am not) is more
appropriate.

∂02-Mar-81  1312	Barry Margolin             <Margolin at MIT-Multics> 	Re: Timings
Date:     2 March 1981 1610-est
From:     Barry Margolin             <Margolin at MIT-Multics>
Subject:  Re: Timings
To:       JIM at MIT-MC
Cc:       RPG at SU-AI

Bernie Greenberg has already been volunteered to do the Multics MacLisp
timings, although I'm sure he won't mind your help, especially when it
gets to Macsyma timings.

∂02-Mar-81  1634	RPG  	Lisp Timings  
To:   info-lispm at MIT-AI, lisp-discussion at MIT-AI,
      "#TIMING.MSG[TIM,LSP]" at SU-AI
	As most of you know, there will be an attempt made to do a
series of Lisp timings in which various benchmarks submitted by the
Lisp community are tested on a variety of different Lisp systems.
Since there will need to be some translations done in order to run
these benchmarks in systems for which they were not intended, there
is the secondary (!) problem of learning what is really needed to do
these translations more readily in the future.

	I will be co-ordinating this effort and will be distributing
the results when they are in. For this purpose I have set up 3
mailing lists at Stanford:

	LISPTIMING 	 the list of people interested in this topic
	LISPTRANSLATORS, the list of people who have volunteered
			 to do the timing tests (and translations)
			 at the various sites
	LISPSOURCES	 the list of people who will be supplying
			 benchmarks

	You can MAIL to these entities at SAIL (e.g. MAIL
LISPTIMING@SAIL...)  and thus avoid swamping the mailing lists we
have been using so far.

	If you care to be on one of these lists, please send me
(rpg@sail) your login name and site exactly as your mailer will
understand it along with which list you wish to be on. If you are
supplying programs or talent, please let me know which Lisp, which
machine, and which OS you are representing.

	In addition, a list of all messages pertaining to this
extravaganza will be on TIMING.MSG[TIM,LSP] at SAIL (you can
FTP from SAIL without logging in). In general, this area will
contain all of the information, programs, and results for this
project.

	If you know of anyone who is not on the net and who may be
able to help, send me a message and a method for getting in touch
with him/her. Over the next few days I hope to establish some of the
methodological considerations (such as GC times) for the project.

			Dick Gabriel	(RPG@SAIL)

∂03-Mar-81  1524	RPG  	Lisp Timing Mailing List
To:   "@LSPTIM.DIS[P,DOC]" at SU-AI   
	Welcome to the Lisp Timing mailing list. As you may have
already guessed, the Lisp Timing Evaluation project is very
large in scope, and if we are to make a contribution to the
understanding of how to evaluate such an elusive thing as an entire
computing environment we will need to consider many methodological
issues. Since I am no expert on such evaluations I am going to require
a good deal of help, and so far more than 20 people have volunteered.

	The problems we face are not just how to measure the performance 
of these Lisp systems, but how to take a diverse set of benchmark
programs and get them to run on systems very different from those they
were written for.

	I hope at the end of this project to be able to report not
only times for programs, but descriptions of systems, translation
problems, and a general guide to the world of Lisp computing.

	The first substantive mailing will be a quick list of 
methodological points we need to consider. This list is not complete,
but aims at the directions we need to go before actual timing runs
can be performed.

	Thank you for your help in this project.

			Dick Gabriel (RPG@SAIL)

Here's the first message, which you missed:
∂03-Mar-81  1616	RPG  	Methodology considerations:  
To:   "@LSPTIM.DIS[P,DOC]" at SU-AI   
1. GC time is critical. Every timing should include CPU time
as measured by the CPU clock plus GC time. If GC time is not
accounted in the LISP, we should include a standard test, such
as a program that creates a large, standard structure (balanced
tree of some sort?) and then count CPU time on a forced GC, resulting
in a seconds/cell figure for each machine.  Maybe we should do this
in addition to the benchmarks? [In fact, measuring GC time in a meaningful
way is quite difficult due to different algorithms. Perhaps a range of
tree structures? Maybe not all algorithms are symmetric on car/cdr?]
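The seconds-per-cell measurement proposed in point 1 can be sketched as follows (illustrative Python, not a Lisp implementation: Python's gc module and a tuple-based balanced tree stand in for a Lisp heap and a forced GC, and the function names are my own):

```python
import gc
import time

def cons_tree(depth):
    """Build a balanced binary tree of 2**depth - 1 pair 'cells'."""
    if depth == 0:
        return None
    return (cons_tree(depth - 1), cons_tree(depth - 1))

def gc_seconds_per_cell(depth=16):
    """Force a full collection over a large live structure and
    report CPU seconds per cell, as point 1 suggests."""
    tree = cons_tree(depth)
    cells = 2 ** depth - 1
    start = time.process_time()
    gc.collect()               # a forced GC must traverse the live tree
    elapsed = time.process_time() - start
    del tree
    return elapsed / cells
```

Running this at several depths would give the seconds/cell figure per machine; as the bracketed note warns, the figure still depends on the collection algorithm and the shape of the structure.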

2. Translating non-standard control structures can be a problem.
What about non-local goto's a la catch/throw? These can be simulated
with ERROR/ERRSET or with spaghetti hackery in InterLisp. These questions
should be addressed by having each translator propose various techniques 
and having the source decide on which to use. Or maybe we should use
all such methods?

3. All non-LISP syntax must be pre-expanded (i.e. CLISP) to allow
the local system to optimize as appropriate.

4. Both interpreted and compiled code will be timed.
All code will have macros pre-expanded (at local sites?) so that
efficiencies due to incremental destructive expansion can be
eliminated. 

5. Numeric code should have all types announced to the translators by the
sources so that optimizations can be made without deductions.
All other such information must be provided.

6. The size of such programs can be arbitrary, though translating
MACSYMA may take a while to do. 

7. All tools developed to aid translation should be forwarded to
RPG so that they may be evaluated and distributed if appropriate.

8. Programs that are useful to a source but impossible (in a
practical sense) to translate should merit special attention to 
decide if there is a useful feature involved.

9. (from GLR)
Timing various programs is a good idea, but the data will
be a little hard to extrapolate.  Is anyone going to measure
parameters such as CONS rate, time to invoke a procedure,
and add times? [Not only that, but number CONSing and its
effect on numeric calculations should be measured as well. Here
RPG will appoint some experts (like JONL) to make up some
good numeric testing code to isolate implementational problems
with specific aspects of Lisp.]

10. People should supply some estimate of the runtime and the results
of their benchmarks. Such things as 2 minutes of CPU on a KL running
TOPS-10 is OK, but for unfamiliar machines/Lisps this may not be good enough.
Try to aim at some estimate in terms of the number of CONSes or function
call depth.

11. Every candidate system should have a detailed description of that
system (CPU architecture, memory size, address size, paging algorithm...)

∂04-Mar-81  0449	Robert H. Berman <RHB at MIT-MC> 	Lisp Timing Mailing List  
Date: 4 March 1981 07:48-EST
From: Robert H. Berman <RHB at MIT-MC>
Subject:  Lisp Timing Mailing List
To: RPG at SU-AI
cc: " @LSPTIM.DIS[P,DOC]" at SU-AI


May I suggest the following as a benchmark for numerically oriented
problems: the time it takes to do a fast Fourier transform of, say, length
1024, of real or complex data.


I have been collecting timing statistics for this benchmark over a period
of 6 years, on a wide range of machines (nearly 50) and compilers,
assemblers, etc.  Thus, this benchmark would be very helpful
in relating Lisp machine performance to many other architectures.

I have a class of problems that I run that use transform methods
for solving partial differential equations and performing
convolutions and smoothing. Hence my interest in FFTs.

Several points to keep in mind about this benchmark:

1. On LM's there is a difference between small flonums and flonums.
Suppose it were done with macsyma's bigfloat package to allow
for  extended precision.

2. Fast Fermat (Integer) Transforms are also helpful here. Integers
in the range 0 to 2↑20, say, can be as useful as small
flonums, but they use only integer arithmetic.

3. Power of 2 transforms, and their families, radix 2, radix 4+2,
radix 8+4+2, etc., can do some of their arithmetic by shifting, rather than
dividing. But other bases, i.e. 96 instead of 64 or 128, can be more
efficient than doubling the transform length.

4. The internal data representation can make a difference. Local
variables on the stack of a subroutine are faster to reference than
arrays. I understand there is an architectural limit of 64 stack
variables on LM's. Would it ever be possible to change it? In a 4+2
algorithm, the fastest transform using stack variables only could then
be a 256 length transform, and then there would be a degradation for
longer transforms that used array references.

5. I don't have a completely good feeling yet for all of the
subtleties and speedups available for microcoding a problem
vs writing in lisp, interpreting, compiling, or compiling
into microcode. When a segment of code is going to be used over and
over again, and the code isn't going to change, shouldn't it be
in microcode?

6. I can make several fft packages available in lisp now. One is a
naive radix 2 butterfly algorithm, designed to be short to write and
implement in a hurry. The second is a radix 4+2 and radix 96 family
of transforms that were written for a vector architecture like the Cray,
but translated nearly literally into lisp. Because the Cray encourages
temporary vectors, this radix 4+2 algorithm uses a lot of storage,
rather than transforming in place. I have not yet looked into the issues
I raised in 4 or 5, but these need attention as well.
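A naive radix-2 butterfly of the kind mentioned in point 6 can be sketched as follows (illustrative Python, not Berman's package; the input length must be a power of two):

```python
import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # butterfly: combine the half-size transforms with twiddle factors
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k]
                for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])
```

As a sanity check, a unit impulse transforms to an all-ones spectrum. A benchmark run of length 1024 would time fft over a 1024-element list of reals or complexes.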

--  Bob Berman  (rhb@mc)

∂04-Mar-81  0957	Scott.Fahlman at CMU-10A 	Re: Translators    
Date:  4 March 1981 1212-EST (Wednesday)
From: Scott.Fahlman at CMU-10A
To: Dick Gabriel <RPG at SU-AI> 
Subject:  Re: Translators
CC: guy.steele at CMU-10A
In-Reply-To:  Dick Gabriel's message of 3 Mar 81 19:22-EST
Message-Id: <04Mar81 121256 SF50@CMU-10A>


Dick,
I notice in an earlier message that it was contemplated that a full set of
timings be done on CMU's modified TOPS-10 system running MACLISP.  As a 
point of information, all serious Maclisp work here has been moved to the
2060, now that we have one.  I think that running benchmarks for an obsolete
and obviously brain-damaged system which nobody should ever again be forced to
use for anything serious would be a waste of time, and I am not likely to
want to devote any effort to it (although the task would be relatively small
if we get things already translated into legal Maclisp, since the differences
are few).  I could devote some small amount of effort to benchmarking TOPS-20
maclisp, though there are other sites that have this as well and I would prefer
that they carry a good deal of the load on this.

We are willing, even eager, to get timings for Spice Lisp on the extended PERQ
(once we get an extended PERQ), but this effort will lag the others by 6 months
or so while we get our act together.  I would prefer to save our translation
and measurement cycles for that task, since lots of places can check out a
Maclisp.

All of this looks fairly interesting.  It may generate more heat than light,
but at least there will be some data to base the flames on, and the translation
aids should be a very useful side effect.
-- Scott

∂04-Mar-81  0959	CSVAX.char at Berkeley 	lisp benchmarking    
Date: 4 Mar 1981 09:00:47-PST
From: CSVAX.char at Berkeley
To: rpg@sail
Subject: lisp benchmarking
Cc: anlams!boyle@Berkeley, CSVAX.char@Berkeley, CSVAX.fateman@Berkeley

Richard Fateman has informed me of the effort you're organizing to
compare Lisp systems.  James Boyle (csvax.anlams!boyle@BERKELEY) and I
(csvax.anlams!char@BERKELEY) would like to be put on your mailing list
for lisp benchmarking.  We have a program, part of a program
transformation system, which you may be interested in including in the
benchmarking.  It currently runs on Franz, and on the IBM370 Lisp
available at Argonne.  We could create a special version of the code
that predefines variables instead of reading their values off of files;
I/O was the only real problem I had in converting the program to Franz
this past fall.  It is an interesting program in that it is a "real"
application of Lisp -- people have used the transformation system for
development of math software here at Argonne, as preprocessor to a
theorem prover, etc.  It is not so interesting from the viewpoint of
exercising a lot of different Lisp features --  mainly list access and
creation, and CONDing.  Jim Boyle estimates that an interesting
benchmark run would take 30-60 min. of Vax cpu time running under Franz
(interpreted).  This might be too long for benchmarking, if testing
resources are expensive.

∂04-Mar-81  1627	HEDRICK at RUTGERS 	sometime of possible interest 
Date:  4 Mar 1981 1919-EST
From: HEDRICK at RUTGERS
Subject: sometime of possible interest
To: rpg at SU-AI

I am not entirely clear what is going on with your lisp timings
mailing list.  However you may possibly be interested in
looking at the file [rutgers]<hedrick>newlisp.  You can FTP it
without logging in, I think.  If you have to log in over FTP,
specify user name ANONYMOUS and any password.  This describes
the various tests I have done during design of ELISP, the new
extended addressing version of UCI Lisp for Tops-20.  I think
ELISP will not have much in the way of innovations.  It is
intended to be quite "classical".  I.e. something that we know
how to do, and know that the results will be useful for us.
It is Lisp 1.6/UCI Lisp constructed with Lisp machine technology
(to the extent we can do it on the 20: no CDR-coding, since that
requires microcode changes, but we do use a copying GC and
everything is done with typed pointers.)  I expect the performance to be
similar to that of UCI Lisp, as the basic structures will be the same.
It will differ mostly because of completely different data
representations and GC methods.  And because of extended addressing,
which on the 20 still has performance problems.  NEWLISP refers to these
problems without explaining them.  The main problem is in the design of
the hardware pager. This is the thing that maps virtual to physical
addresses.  It should be associative memory, but is implemented by a
table. The net effect of the table is that the same entry is used for
pages 1000, 3000, 5000, 7000, etc.  In fact, which line in the table is
used is determined by bits 774 of the page number (i.e. pages
1000,1001,1002, and 1003 are all stored in the same line).  There is a
kludge to prevent odd numbered sections from interfering with even
numbered ones (The section number is bits 777000), which is why I listed
pages 1000, 3000,etc., and not 0, 2000, ...  If you happen to be
unlucky, and have code in page 1000, a stack in page 3000, and
data in page 5000, your code can easily run a factor of 20 times 
slower than it would otherwise.  By carefully positioning various
blocks of data most of the problems can be prevented.
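Hedrick's pager-line collision can be modeled directly. The sketch below is my own hypothetical model in Python, with the masks taken from the text: the table line is selected by bits 774 (octal) of the page number, and the "kludge" keeps odd-numbered sections from colliding with even ones:

```python
def pager_line(page):
    """Which hardware pager table line a page number hashes to."""
    line = (page & 0o774) >> 2        # bits 774 of the page number
    section_parity = (page >> 9) & 1  # low bit of the section (bits 777000)
    return (line, section_parity)

# Pages 1000, 3000, 5000 (octal) all land on the same line, as do
# pages 1000..1003, while page 2000 is kept apart by the even/odd
# section kludge -- matching the collisions described above.
```

So code at page 1000, a stack at 3000, and data at 5000 would indeed fight over one table line, which is the factor-of-20 slowdown scenario Hedrick describes.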

Please note that ELISP is intended to be a quick and safe implementation.
That means that I am not trying to get the last few percent of efficiency.
I am doing things in ways that I believe will not require much debugging
time, even at the price of speed.  This is because I am a manager, and
don't have much time to write code or to support it after it is finished.
-------

∂06-Mar-81  1301	HES at MIT-AI (Howard Shrobe) 	Methodology considerations:  
Date:  6 MAR 1981 1556-EST
From: HES at MIT-AI (Howard Shrobe)
Subject: Methodology considerations:  
To: RPG at SU-AI

Re your comment about including GC time. I agree wholeheartedly and have been
having a bit of disagreement with Fateman about same.  In addition I would
suggest trying to get statistics on how time-shared machines degrade with load.
A lot of folks are trying to make estimates of personal versus time shared and
such choices can only be made if you know how many people can be serviced by a
VAX (KL-10, 2060, etc.) before performance drops off.  Some discussion of this
issue would be useful to such folks.

Howie Shrobe

Subject: Lisp Timings Group
To: rpg at SU-AI
cc: correira at UTEXAS-11

Hi.  I've been involved with the maintenance/extensions of two lisps, UTLISP
(on CDC equipment) and UCILISP (Rutgers Lisp, etc).  One of the things that I
did in our version of UCILISP that was missed by Lefaivre (and, hence, Meehan)
was to speed up the interpreter.  (Lefaivre got a copy of my source shortly
before I made the speed ups.)  It actually wound up being a few percent faster
than MACLISP (both on the same TOPS-10 machine).  (I believe MACLISP source
code is close enough to make the same changes - this was very old/unchanged
code in the interpreter.)

Anyway, I'd like to volunteer running the tests on UCI Lisp on both a 2060
(TOPS-20) and a KI-10 (TOPS-10).  I'm a little hesitant about committing
myself to too much work but it looks like you'll have several people running
UCI Lisp so maybe the work will be spread around.  (I guess this means that
you should add me to your LISPTIMING and LISPTRANSLATORS lists.)

For easily transportable code, I'll run it on UTLISP but for any extensive
changes I'll pass.  The current person who is in charge of that Lisp may send
you a separate note.  I've tried to encourage him to do so.  The UTLISP was
(at one time) judged by the Japanese surveyors to be the fastest interpreted
Lisp.  (That is my recollection of the first survey that we were involved in,
sometime around the mid-70's?  I'm sure it was solely due to the speed of the
hardware.)  It is not an elegant Lisp and has a lot of faults but is a pretty
fast interpreter.  The compiler is a crock - when it works.  It was someone's
masters thesis in the early 70's.

I strongly suggest that you run each of the various Lisps on different CPUs
whenever possible.  There was a note out last fall that compared Interlisp,
Maclisp, and UCI Lisp.  You may remember that I sent out a note that
complained that the timings for UCI Lisp were obviously on a different CPU
(probably a KI-10 compared to KL-10 or 2060).

I also suggest that while general purpose benchmarks may show a general
tendency, we should strive for timings of specific operations.  Such things as
CONS (including GC time), variable binding, function calling, arithmetic,
property list manipulation, array manipulation, stack manipulation (I guess
that's in function calling/variable binding), tree traversing (CAR/CDR
manipulations), FUNARG hacking, COND evaluations, PROG and looping hacking,
etc.  Personally my programs don't use much arithmetic so I don't think that's
too important but obviously some people do.

It would also be useful if people could supply timings of the machine the LISP
is run on.  Such things as instruction fetch times and memory speed are
obviously important.  This might be useful in comparing two Lisps on different
machines.  (Exactly how does a CYBER-170/750 compare with a DEC-2060?)

I don't think that the programs need to be very big or long-running.  They
just need to run long enough (10 seconds?) to minimize minor timing problems.
The important thing is that the various programs concentrate on one specific
area as much as possible.  Of course, all this needs to be balanced by some
programs that have a general mix of operations.

Another possible test, which is not really a timing test, would be to give all
us hackers some particular programming project which would take on the order
of an hour to do.  We would each do it in our own Lisp and report how long it
took us to do it (clock time) and how much resources we used (CPU time).  It
might be also reasonable to report how we did it (eg, used EMACS or some other
editor to write/fix the code versus edit in Lisp itself, how many functions
(macros?), how much commenting, how transparent/hackish the code is, etc.)  I
don't mean that this should be a programming contest but it might give some
idea what is involved in writing a program in each Lisp.  This involves
composing, executing, debugging, and compiling.  I feel this would be a truer
test of a LISP in a typical research situation if we could (hah!) discount the
various programmers skills/resources.  (This suggestion should really stir
up some flames!!)

	Mabry Tyson
	(tyson@utexas-11)
-------

∂10-Mar-81  0727	correira at UTEXAS-11  	lisp timings    
Date: 10 Mar 1981 at 0916-CST
From: correira at UTEXAS-11 
Subject: lisp timings
To: rpg at su-ai
cc: atp.tyson at utexas-20

If anyone is interested, I would be willing to do the work to run the
timing programs for UTLISP Version 5.0.  This is the latest version of
UTLISP, containing code to drag the dialect into the 20th Century of
LISP interpreters.  It has been my experience in the past that
most people shrug UTLISP off with a "oh, that's the one with the extra
pointer field" comment, but I think it is a pretty good LISP now and should be
included in the timings effort. However, the compiler is still a complete
crock (although I am working on a new one, it won't be ready for at least
6 months), so I will pass on doing compiler timings.  Please add my name to
the LISPTIMING and LISPTRANSLATORS mailing lists.

					Alfred Correira
					UTEXAS-11
-------

∂03-Mar-81  2109	Barrow at SRI-KL (Harry Barrow ) 	Lisp Timings    
Date:  3 Mar 1981 1727-PST
From: Barrow at SRI-KL (Harry Barrow )
Subject: Lisp Timings
To: rpg at SU-AI

	I would certainly like to be on your list of recipients of
LISP timing information.   Please add BARROW@SRI-AI to your list.

Did you know that Forrest Baskett has made some comparative timings
of one particular program (cpu-intensive) on several machines, in
several languages?   In particular, LISP was used on DEC 2060, KL-10,
KA-10, and MIT CADR machines   (CADR came out comparable with a KA-10,
but about 50% better if using compiled microcode).

What machines do you plan to use?   I would be very interested to
see how Dolphins, Dorados, and Lisp machines compare...


				Harry.



-------

Yes, I know of Baskett's study. There is at least one other Lisp
study, by Takeuchi in Japan.

So far we have the following Lisp systems with volunteers to
do the timings etc:

Interlisp on MAXC, Dolphin, Dorado
MacLisp on SAIL
InterLisp on SUMEX
UCILISP on Rutgers
SpiceLisp on PERQ
Lisp Machine (Symbolics, CADR)
Maclisp on AI, MC, NIL on VAX, NIL on S1 (if available)
InterLisp on F2
Standard Lisp on TOPS-10, B-1700, LISP370
TLC-lisp and muLisp on z-80
Muddle on DMS
Rutgers Lisp
Lisp Machine
UCILISP and MLISP on TOPS-10, TOPS-20
Jericho InterLisp
some Z80 LISP
Multics Maclisp
Cromemco Lisp on Z80
Franz Lisp on VAX UNIX
∂02-Mar-81  0004	Charles Frankston <CBF at MIT-MC> 	timings   
Date: 2 March 1981 00:55-EST
From: Charles Frankston <CBF at MIT-MC>
Subject: timings
To: CSVAX.fateman at BERKELEY
cc: LISP-FORUM at MIT-MC, masinter at PARC-MAXC, RWS at MIT-XX,
    guttag at MIT-XX

It is rather obvious that the timings you distributed are wall times for
the Lisp Machine, whereas the Vax and MC times count only time spent
directly executing code that is considered part of Macsyma.  Ie. the
Vax and MC times exclude not only garbage collection, but operating system
overhead, disk i/o and/or paging, time to output characters to terminals, etc.

I submit comparing wall times with (what the Multics people call) "virtual
CPU" time is not a very informative exercise.  I'm not sure if the Lisp
Machine has the facilities to make analogous measurements, but everyone
can measure wall time, and in some ways that's the most useful comparison.
Is anyone willing to try the same benchmarks on the Vax and MC with just
one user on and measuring wall times?

Also, are there yet any Lisp machines with greater than 256K words?  No
one would dream of running Macsyma on a 256K word PDP10 and I presume that
goes the same for a 1 Megabyte Vax.  The Lisp Machine may not have a time
sharing system resident in core, but in terms of amount of memory needed
for operating system overhead, the fanciness of its user interface
probably more than makes up for that.  I'll bet another 128K words of
memory would not be beyond the point of diminishing returns, insofar
as running Macsyma.

Lastly, the choice of examples.  Due to internal Macsyma optimizations,
these examples have a property I don't like in a benchmark.  The timings
for subsequent runs in the same environment differ widely from previous
runs.  It is often useful to be able to factor out setup times from a
benchmark.  These benchmarks would seem to run the danger of being dominated
by setup costs.  (Eg. suppose disk I/O is much more expensive on one system;
that is probably not generally interesting to a Macsyma user, but it could
dominate benchmarks such as these.)

I would be as interested as anyone else in seeing the various lisp systems
benchmarked.  I hope there is a reasonable understanding in the various
Lisp communities of how to do fair and accurate measurement, else the results
will be worse than useless: they will be damaging.


∂17-Mar-81  1155	Masinter at PARC-MAXC 	Re: GC 
Date: 17 Mar 1981 11:54 PST
From: Masinter at PARC-MAXC
Subject: Re: GC
In-reply-to: RPG's message of 16 Mar 1981 1234-PST
To: Dick Gabriel <RPG at SU-AI>
cc: LispTiming@su-ai, LispTranslators at SU-AI

Interlisp-D uses a reference-count garbage collection scheme. Thus, "garbage
collection" overhead is distributed to those functions which can modify reference
counts (CONS, RPLACA, etc.) with the following important exceptions:

	no reference counts are maintained for small numbers or literal atoms
	references from the stack are not counted

Reference counts are maintained in a separate table from the data being counted.
The table can be thought of as a hash table. In addition, the "default" entry in
the table is reference count = 1, so that in the "normal" case, there is no table
entry for a particular datum.

"Garbage collection" then consists of (a) sweeping the stack, marking data with a
"referenced from the stack" bit in the reference count table if necessary, (b)
sweeping the reference count table, collecting those data whose reference counts
are 0 and which are not referenced from the stack.
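The scheme Masinter describes can be sketched in modern code. The following is an illustrative Python model only (not Interlisp-D): counts live in a side table whose default entry is 1, so most data have no table entry at all, and collection frees zero-count data not referenced from the stack.

```python
class RefCountHeap:
    """Toy model of the Interlisp-D side-table reference-count scheme."""
    def __init__(self):
        self.counts = {}    # side table: object id -> count; absent means 1
        self.objects = {}   # simulated heap: object id -> payload

    def new(self, oid, payload):
        self.objects[oid] = payload      # default count of 1, no table entry

    def add_ref(self, oid):
        self.counts[oid] = self.counts.get(oid, 1) + 1

    def drop_ref(self, oid):
        self.counts[oid] = self.counts.get(oid, 1) - 1

    def collect(self, stack_roots):
        # (a) stack references are treated as a mark, (b) sweep the
        # count table, freeing data whose count is 0 and which are
        # not referenced from the stack.
        for oid in [o for o, c in self.counts.items() if c == 0]:
            if oid not in stack_roots:
                del self.objects[oid]
                del self.counts[oid]

heap = RefCountHeap()
heap.new("a", "cons-1")
heap.new("b", "cons-2")
heap.drop_ref("a")                 # count falls to 0
heap.collect(stack_roots=set())    # "a" is reclaimed, "b" survives
```

Note how "b", never touched after creation, needs no table entry at all: that is the point of the default-count-of-1 design.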

--------------

Because of this scheme, it is very difficult to measure performance of Interlisp-D
independent of garbage collection, because the overhead for garbage collection is
distributed widely (although the timing for the sweep phase can be separated
out).

Secondly, the choice of a reference count scheme over the traditional
chase-and-mark scheme used by most Lisps was conditioned by the belief that
with very large virtual address spaces, it was unreasonable to require touching
all active storage before any garbage could be collected.

This would indicate that any timings should take into consideration paging
performance as well as garbage collection overhead, if they are to accurately
consider the overall performance picture.

Larry

∂16-Mar-81  1429	HEDRICK at RUTGERS 	Re: Solicitation    
Date: 16 Mar 1981 1725-EST
From: HEDRICK at RUTGERS
Subject: Re: Solicitation  
To: RPG at SU-AI
cc: lispsources at SU-AI
In-Reply-To: Your message of 16-Mar-81 1526-EST

ELISP: extended R/UCI lisp.  This will be a reimplementation of
Rutgers/UCI lisp for Tops-20 using extended (30-bit) addressing. It is
implemented using typed pointers and a copying GC, but will otherwise be
almost exactly the same as R/UCI lisp (unless you are accustomed to
CDR'ing into the innards of strings, etc.).
  hardware - Model B KL processor or Jupiter.  I am not clear whether
	a 2020 has extended addressing.  If so that would also be
	usable.
  OS - Tops-20, release 5 or later (release 4 useable with minimal
	patching)
  binding type- shallow dynamic, with same stack mechanisms as
	UCI Lisp
  compiler - Utah standard lisp transported to our environment

At the moment performance appears to be the same as R/UCI Lisp, except
that the GC takes about twice as long for a given number of CONS cells
in use.  The time per CONS may be less for substantial programs, since
we can afford to run with lots of free space, whereas our big programs
are pushing address space, and may not be able to have much free space,
hence GC a lot.

At the moment I have an interpreter that does a substantial part of Lisp
1.6.  I hope to finish Lisp 1.6 by the beginning of the summer.  I also
hope to have a compiler by then.  I am doing the interpreter personally,
and one of my staff is doing the compiler.  I am implementing R/UCI
lisp roughly in historical order, i.e. Lisp 1.6 first, then UCI lisp,
then Rutgers changes, though a few later features are slipping in (and
I am not doing anything I will have to undo).

Note that I have little if any interest in performance.  I want to match
R/UCI lisp, since users may complain if things suddenly slow down, but
that is about it.  I am more concerned about reliability (since I will
have little time to maintain it) and how long it takes to write it
(since I have little time to write it).  Our users are doing completely
traditional Lisp work, and have little or no interest in more flexible
binding or control semantics (we supplied a version of R/UCI lisp with
Scheme semantics, and no one was interested), nor in speed in
arithmetic.  The system is designed to be modular enough that
improvements can be done as needed.  I am giving some thought to
transportability, though not as much as the Utah folks. I think we
should be able to transport it to a system with at least 16 AC's and a
reasonable instruction set (e.g. VAX) with 2 man-months or less.

As far as the hardware we have available for testing, we will shortly
have 1M of MOS memory, 4 RP06's on 2 channel, and a model B KL processor
(the model matters since the model B is faster than the model A.  Note
that the processor model number is almost the only variable you care
about in a 20, but it is not derivable from the DEC marketing
designation, since a 2050 or 2040 may be either model.  However a 2060
is always model B).
-------

∂16-Mar-81  1433	HEDRICK at RUTGERS 	Re: GC    
Date: 16 Mar 1981 1728-EST
From: HEDRICK at RUTGERS
Subject: Re: GC  
To: RPG at SU-AI
cc: lisptranslators at SU-AI
In-Reply-To: Your message of 16-Mar-81 1534-EST

; the garbage collector.  its init routine is called gcinit and
; takes these args:
;   - the beginning of constant data space, which is really at the
;	start of the first of the two data spaces
;   - the first word beyond the constant data space, which is the
;	beginning of the usable part of the first data space
;   - the start of the second data space
;   - the first word beyond the second data space
	; garbage collector variables:
	;free - last used location in data space
	;lastl - last legal location in this data space - 1.  Trigger a GC if
	;   someone tries to go beyond this.  
	;stthis - start of this data space
	;enthis - end of this data space
	;stthat - start of other data space
	;enthat - end of other data space
	;stcnst - start of constant space
	;encnst - end of constant space

	.scalar lastl,stthis,enthis,stthat,enthat,stcnst,encnst

freesz==200000	;amount of free space at end of GC

   <<<initialization code omitted>>>


;This is a copying GC, modelled after the Lisp Machine GC, as
;described in Henry Baker's thesis.  There are two data spaces, old and new.
;A GC copies everything that is in use from old to new, and makes new the
;current one.  The main operation is translating objects.  If the object
;is absolute, e.g. an INUM, this is a no-op.  Only pointers into the old
;space are translated.  They are translated by finding the equivalent object
;in the new space, and using its pointer.  There are two cases:
;  - we have already moved the object.  In this case the first entry of
;	the old space copy is a pointer to the copy in new space.  These
;	pointers have the sign bit on, for easy detection.
;  - we have not moved the object.  In this case, we copy it to the end of
;	new space, and use the pointer to the beginning of this copy.
;At any given time, we have a pointer into new space.  Everything before
;this pointer has been translated.   Everything after it has not.  We also
;have to translate the stack and the constant area.  Indeed it is translating
;these areas that first puts something into new space to translate.

mark==400000,,0		;bit that says this has already been translated

;Because there are four different areas to translate, we have a separate
;routine to do the translation.
;  gctran:
;	w3 - first address to be translated.  W2 is updated, and is the
;		pointer mentioned above.  I.e. everything before W2 has
;		been translated
;	w4 - last address to be translated.

;The code within gctran avoids the use of the stacks, in order to avoid
;performance problems because of addressing conflicts between the stack
;and the areas being GC'ed.

gctran:	move o1,(w3)		;o1 - thing to be translated
	gettyp o1		;see what we have
	xct trntab(w2)		;translate depending upon type
	camge w3,w4		;see if done
	aoja w3,gctran		;no - next
	ret

;GCTRAX - special version of the above for doing new space.  Ends when
;we reach the free pointer
gctrax:	move o1,(w3)		;o1 - thing to be translated
	gettyp o1		;see what we have
	xct trntab(w2)		;translate depending upon type
	camge w3,free		;see if done
	aoja w3,gctrax		;no - next
	ret

;;TYPES
trntab:	jsp w2,cpyatm		; atom
	jfcl			;  constant atom
	jsp w2,cpycon		; cons
	jfcl			;  constant cons
	jsp w2,cpystr		; string
	jfcl			;  constant string
	jsp w2,cpychn		; channel
	jfcl			;  constant channel
	jfcl			; integer
	jsp w2,cpyrea		; real
	jrst 4,.		; hunk
	jfcl			; address
	jsp w2,cpyspc		; special

;here to translate a CONS cell - normally we copy it and use addr of new copy
cpycon:	skipge o2,(o1)		;do we already have a translation in old copy?
	jrst havcon		;yes - use it
	dmove o2,(o1)		;copy it
	dmovem o2,1(free)
	xmovei o2,1(free)	;make address into CONS pointer
	tlo o2,(object(ty%con,0))
	movem o2,(w3)		;put it in place to be translated
	tlc o2,(mark\object(ty%con,0)) ;make a pointer to put into old copy
	movem o2,(o1)		;and put it there
	addi free,2		;advance free list
	jrst (w2)

havcon:	tlc o2,(mark\object(ty%con,0)) ;turn into a real cons pointer
	movem o2,(w3)		;put in place to be translated
	jrst (w2)

  <<<the rest of the types are like unto this>>>
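The copying discipline in the listing above (forwarding pointers with a mark bit left in old space; a scan pointer that sweeps new space translating everything behind it) can be sketched as follows. This is an illustrative modern Python model, not the ELISP assembly:

```python
# Two-space copying translation, modelled on the listing above:
# an old-space cell is either live data or a marked forwarding
# pointer to its copy in new space.

FORWARD = "forwarded-to"   # stands in for the sign-bit mark

def translate(obj, old_space, new_space):
    """Return the new-space index for obj (an old-space index),
    copying it to the end of new space if not yet moved."""
    cell = old_space[obj]
    if isinstance(cell, tuple) and cell[0] == FORWARD:
        return cell[1]                     # already moved: use the forwarding pointer
    new_space.append(cell)                 # copy to end of new space
    new_addr = len(new_space) - 1
    old_space[obj] = (FORWARD, new_addr)   # leave forwarding pointer in old copy
    return new_addr

old = ["cons-a", "cons-b", "cons-a-again"]
new = []
roots = [0, 1, 0]      # the "stack"; cell 0 is referenced twice
translated = [translate(r, old, new) for r in roots]
```

Translating the roots is what first puts objects into new space, just as the comments say; the second reference to cell 0 finds the forwarding pointer and copies nothing.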
-------

∂16-Mar-81  1810	Scott.Fahlman at CMU-10A 	Re: GC   
Date: 16 March 1981 2109-EST (Monday)
From: Scott.Fahlman at CMU-10A
To: Dick Gabriel <RPG at SU-AI> 
Subject:  Re: GC
In-Reply-To:  Dick Gabriel's message of 16 Mar 81 15:34-EST
Message-Id: <16Mar81 210911 SF50@CMU-10A>


Dick,
I believe we gave you a copy of the Spice Lisp internals document?  If so,
our GC algorithm is described there.  We can run with GC turned off, though
we pay some overhead anyway.  If incremental GC is turned on, the cost is
so spread out that it would be impossible to separate.  Perhaps the only fair
thing to do, if the thing of interest ultimately is large AI jobs, is to run
big things only, or small things enough times that a few GCs will have happened.
Then you can just measure total runtime.
-- Scott

∂16-Mar-81  1934	PLATTS at WHARTON-10 ( Steve Platt) 	lisp -- my GC and machine specs  
Date: 16 Mar 1981 (Monday) 2232-EDT
From: PLATTS at WHARTON-10 ( Steve Platt)
Subject: lisp -- my GC and machine specs
To:   rpg at SU-AI

  Dick, just a reminder about this all...  it is all describing a
lisp for Z80 I'd like to benchmark out of curiosity's sake.
  1) All times will have to be done via stopwatch.  I might write a
quick (DO <n> <expr>) to repeat evaluation oh, say, 100 times or so
for better watch resolution.  GC time will *have* to be included
as I don't separate it out.
  2) I plan to be speaking to John Allen about his TLC lisp -- as there's
probably much similarity, I'd like to benchmark his at the same time.
I'll be sending him a copy of this letter.
 
  3) GC is a simple mark'n'sweep.  At some future time, I might replace
this with a compressing algorithm, makes core-image saving simpler.
I GC cons cells and atom space, but not number or string space (number
space for bignums (>1000 hex or so, use pointers for small integers),
string space for pnames.)  Proper strings might be implemented in the
future sometime.
  4) Lisp is an unreleased CDL lisp, still under development.  It works
under CPM 1.4 or anything compatible with that, on a Z80.  CDL Lisp has
its roots in Maclisp, I guess you'd say.  Binding is deep.  Compiler?
Hah -- maybe after my dissertation is finished...  Macros -- the same.
I don't really view macros as essential, so they have a relatively low
priority... both have been thought about, but won't be benchmarkable.
  5) The hardware environment is relatively constrained.  48K physically
right now, may be up to 60K by benchmark time... (this figures into
roughly 8K free cells, the additional 12K will add 3K cells...)
No cache, only 2 8" floppies.  A typical "good" home system.
 
  After reading this all, it's probably relatively depressing when
compared to some of the major machines being benchmarked.  But it is
representative of the home computing environment...

  If you have any more specific questions, feel free to ask.

   -Steve Platt (Platts @ Wharton)

∂17-Mar-81  0745	Griss at UTAH-20 (Martin.Griss) 	Re: GC      
Date: 17 Mar 1981 0835-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Re: GC  
To: RPG at SU-AI
cc: Griss at UTAH-20
In-Reply-To: Your message of 16-Mar-81 1334-MST

Standard LISP runs on a variety of machines, with existing LISPs, each with
a different GC; we will choose a machine set, and briefly describe it.

What is standard AA analysis???
M
-------

∂17-Mar-81  0837	Robert S. Boyer <BOYER at SRI-CSL> 	Solicitation  
Date: 17 March 1981  08:34-PST (Tuesday)
From: Robert S. Boyer <BOYER at SRI-CSL>
To:   Dick Gabriel <RPG at SU-AI>
Cc:   Boyer at SRI-CSL
Subject: Solicitation  

The machine on which I can run LISP timings is a Foonly F2,
which emulates a DEC KA processor and a BBN pager, and runs
a variant of Tenex called Foonex.  It has 1/2 million words
of 500 nanosecond memory, no cache, no drum, and a CDC
Winchester disk.

I have used Interlisp extensively, but I haven't studied the
compiler output or MACRO sources enough to claim expertise
at optimal coding.

I am marginally familiar with Maclisp now and I plan to
become more familiar soon.

For the purpose of getting a complete set of F2 vs. 2060
timings, I'd be willing to run tests of other PDP-10 LISPs
that are Tenex compatible, provided the tests can be
performed without too much understanding of the LISP
variants.

I have a benchmark that J Moore and I constructed a few
months ago to compare Interlisp and Maclisp.  The files on
ARPANET host CSL named <BOYER>IREWRITE and <BOYER>MREWRITE
contain, respectively, Interlisp and Maclisp code for a far
from optimal rewrite style theorem prover.  (To FTP log in
as Anonymous, password foo.)  MREWRITE is coded so that,
except for the statistics gathering, it is also in Franz
LISP.  To start, you invoke (SETUP).  Then run (TEST), as
many times as you want.  TEST returns some statistics -- but
I assume that RPG will want to standardize here.  (TEST)
turns over storage very rapidly, recurses a lot, does very
little arithmetic, and engages in no fancy structuring (e.g.
RPLACs).  Our intention in coding TEST was to produce
quickly a small facsimile of the heart of our rather large
theorem-proving system in order to compare LISP times.

By intentionally coding a program that would be easy to
translate from Interlisp to Maclisp, we did injustice to
both LISPs.  For example, we used recursion where we might
have used the I.S.OPR construct in Interlisp or the DO
construct in Maclisp -- or a MAP construct in either.

∂17-Mar-81  0847	Robert S. Boyer <BOYER at SRI-CSL> 	LISP Timings  
Date: 17 March 1981  08:43-PST (Tuesday)
From: Robert S. Boyer <BOYER at SRI-CSL>
To:   RPG at SU-AI
Subject:  LISP Timings
cc:   Boyer at SRI-CSL

Could we include a cost column in the final grand tally?  It
has been remarked that many people are trying to decide
which LISP system to use, now and in the future.  Cost will
be an important criterion.  Maintenance charges should be
included since over the life of a machine, they may approach
the purchase price.  It should be relatively easy for each
person who volunteers a machine to indicate the purchase
price and maintenance charges.

∂17-Mar-81  1218	RPG  	Bureaucracy   
To:   lisptiming at SU-AI   
In sending messages around, the following facts are useful:
	RPG is on LISPSOURCES which is equal to
LISPTRANSLATORS, which is a subset of LISPTIMING.

So there is no need to send me a copy of everything, nor
is it necessary to have LISPTIMING and LISPSOURCES on the same
header, for example. Thanks.
			-rpg-

∂17-Mar-81  1921	Bernard S. Greenberg       <Greenberg at MIT-Multics> 	Re: Solicitation    
Date:     17 March 1981 2142-est
From:     Bernard S. Greenberg       <Greenberg at MIT-Multics>
Subject:  Re: Solicitation
To:       lispsources at SU-AI
Cc:       Multics-Lisp-people at MIT-MC

Well, Multics MacLisp, letsee:

Multics Maclisp, consisting of an interpreter, compiler, LAP (not used
by the compiler, tho), runtime, and utilities, was developed by
MIT Lab for Computer Science (LCS) in 1973 with the aim of exporting
the Macsyma math system to Multics (of which MIT-Multics was the only
one at the time).  Dave Reed (now at LCS) and Dave Moon (now at MIT-AI
and Symbolics, Inc.) were the principal implementors then, and
Alex Sunguroff (don't know where he is now) to a lesser degree.
Reed and Moon maintained it until 1976; I maintained it until now.
Its maintenance/support status since my flushance of Honeywell
(December 1980) is now up in the air, although Peter Krupp
at Honeywell is now nominally maintainer.

The interpreter and general scheme of things were developed partly
on the experience of PDP-10 Maclisp, vis-a-vis running out of space,
and an earlier Multics Lisp by Reed, vis-a-vis better ways to do this
on Multics.   Multics MacLisp features virtually infinite address
space (limited by the size of a Multics Process directory, which
is virtually unlimited), a relocating/copying garbage collector,
strings, bignums and other MacLisp features, general compatibility
with (ITS) MacLisp, and very significantly, the facility to interface
to procedures in other languages (including Multics System routines)
on Multics.

With the notable exception of the compiler, which is a large (and
understandable, as well as effective) Lisp program of two large
source files, the system is in PL/I and Multics assembler: the
assembler portions, including notably the evaluator, are that
way for speed.  The language was designed to be as close to
ITS Maclisp as possible at the time (1973), but has diverged some.
The compiler was developed as two modules, a semantics pass
reworked from the then-current version of the fearsome ITS
COMPLR/NCOMPLR (1973), and the code generator was written anew
by Reed (1973), although it uses NCOMPLR-like strategies
(I have a paper on this subject).

Although used in the support of Macsyma, the largest and most important
use of Multics Maclisp is as the implementation and extension language
of the Multics Emacs "text processing and video process management"
system.  Other large subsystems in Multics Maclisp over the years
have included a Multics crash and problem analysis subsystem and
a management-data modeling system (KOMS, about which I know little).

Pointers in Multics Maclisp are 72-bit, which includes a 9-bit
type field.  Non-bignum numbers (fixna and flona) are directly
encoded in the pointer, and do not require allocation, or the
hirsute "PDLNMK" scheme of ITS MacLisp. Symbols and strings are
allocated contiguously, and relocated at garbage-collect time.
Binding is the standard MacLisp shallow-binding (old values
saved on PDL, symbol contains "current" value).  Other Maclisp
language accoutrements (property lists, functional properties,
MacLisp macros, etc.) exist.

"A description of my OS:"

Well, the Multics Operating System enjoys/suffers a paged,
segmented virtual memory, implementing virtual storage and virtual
file access in a unified fashion. The paradigm is so well-known
that I cannot bear to belabor it any more.  The net effect
on Lisp is a huge address space, and heavy interaction
between the GC algorithm and performance.  Multics will run
in any size memory between 256K words and 16 million (36-bit) words.
The Multics at MIT (there are about three dozen multices
all over the world now) has 3 million words of memory,
which I believe is 1-microsecond MOS. The MIT configuration runs
3 cpus - other sites vary between 1 and 5.  The cache per
CPU is 2k words, and is "very fast", but the system gets CPU limited,
and can rarely exceed 1 MIP per cpu (highly asynchronous processor),
although powerful character and bit string handling instructions
can do a lot faster work than a 1 mip load/store chain.  You
wanted to know about disks:

     Date:  16 March 1981 22:54 est
     From:  Sibert (W. Olin Sibert)

     An MSU0451 has 814 cylinders, of 47 records each. Its average seek time
     is 25 ms. (I don't know whether that's track-to-track, 10 percent, or
     half platter -- I'll bet it's track-to-track, though). Its average
     rotational latency is 8.33 ms. Its transfer rate is about 690K 8bit
     bytes (614K 9bit bytes) per second, or 6.7 ms. per Multics record.
     [1024 words]

I cannot really think of benchmark possibilities that would
show the performance of Multics MacLisp to great advantage.
For all its virtual memory, the antiquated basic architecture
of the Honeywell 6000 series (from the GE600) provides a
hostile environment to the Lisp implementor.  Only one register
(AQ) capable of holding a full Lisp pointer exists, and this
same register is the only one you can calculate in, either.
Thus, the compiler can't do useful register optimization
or store-avoidance, and comes nowhere near NCOMPLR, which
is using the same techniques to implement the same language,
in the performance of its object code.
MacLisp type and array declarations are supported, and utilized
in the straightforward way by the compiler to improve generated code,
but in no way could it be claimed that what it generates is
competitive.

Multics MacLisp is "owned by MIT. It is distributed by MIT to anyone
who wants.  It is part of some Honeywell products [Emacs], and is
supported by Honeywell to the extent and only to the extent necessary
to keep these products operative. Honeywell can distribute it,
but may not charge for it, but may charge for products written in it".
Although its support is a current hot potato, interest in using
Multics Maclisp is continually growing, and interesting subsystems
in it are being developed as of this writing.

Anything else?

∂31-Mar-81  1451	RPG  	Timing Benchmarks  
To:   lisptiming at SU-AI   
Since I haven't gotten much in the way of volunteered benchmarks
yet, I propose to begin to write some myself and with the help of
some of you. Here's the initial list of things I want to test the
speed of:

	Array reference and storage (random access)
	Array reference and storage (matrix access)
	Array reference and storage (matrix inversion)
	Short list structure (records, hunks...)
	Long list structure (cdr access)
	CAR heavy structures
	CDR heavy structures
	Interpreted function calls
	Compiled function calls
	Smashed function calls
	Table function calls (FUNCALL, SUBRCALL)
	Tail recursion (?)
	Block compiling
	Reader speed
	Property list structures
	Atom structures (saturated obarrays)
	Internal loops
	Trigonometric functions
	Arithmetic (floating and fixed)
	Special variable lookup
	Local variable lookup
	CONS time
	GC time
	Compiled code load time
	EQ test time
	Arithmetic predicates
	Type determination

Suggestions solicited.
			-rpg-

∂01-Apr-81  1550	Masinter at PARC-MAXC    
Date: 1 Apr 1981 15:49 PST
From: Masinter at PARC-MAXC
To: LispTiming at SU-AI

These are numbers that I generated in late 1977, measuring instruction 
counts for various Interlisp-10 operations. One thing to be careful of in
measuring Interlisp-10 is to watch whether the functions are swapped
or not.... it makes a big difference. I suggest Interlisp-10 timings should
be made (at least once) with NOSWAPFLG set to T before the timed
program is loaded in.

------ begin forwarded message -------

I have just made some measurments of how many instructions it takes 
to do various things in LISP, and I thought they might be of general 
interest.

All measurements are in number of PDP-10 instructions, taking
no account of the relative speed of those instructions.
Measurements for Maxc (which has some special PDP-10 mode
instructions to improve function call performance) are given in
parentheses.

To call a non-swapped compiled function (not in a block) which has
all of its args LOCALVARS takes 50 instructions. (28 on Maxc)
{note that in this and subsequent figures, the time "to call" something
also includes the time to return from it}

To call a SUBR of 1 argument takes 56 instructions. (30 on maxc)
To call a function in the same block where the called function
has all of its args LOCALVARS takes 4 instructions + 1 for each
formal argument.

If the called function has any of its arguments SPECVARS then
it takes 57 instructions plus 12 for each SPECVAR arg and 2 for
each non-specvar arg. To bind variables with a PROG is roughly
the same. (this is 25+9/specvar on Maxc)

Block entry takes 69 instructions, i.e. (BLOCK (FOO FOO)) then
to call FOO will take 19 more instructions than (BLOCKS (NIL FOO
(LOCALVARS . T)))
(this is 45 on Maxc, i.e. about 17 more for block entry).
{you want to do the former if FOO calls itself recursively, though}.

To do a BLKAPPLY* takes 80 instructions + 3 per entry on blkapplyfns
which must be skipped (i.e. if you BLKAPPLY 'FOO and FOO is the
third entry on BLKAPLYFNS then this is 6 extra instructions).
(same on Maxc)

To call a SWAPPED function takes at least 86 additional instructions
per call. This is independent of whether the called function is
a block or a simple function, etc.

A LINKED function call takes 10 more instructions than a non-linked
function call. You should therefore always put (NOLINKFNS . T) 
in your blocks declaration unless you have a specific
reason for wanting the calls linked.

∂05-Apr-81  2141	JHL   via LONDON    
To:   lisptiming at SU-AI   
how about including:
environment switching (stack-groups, eval with alist (ptr),
	closure, etc)
variable lookup  within switched environment
primitives on strings, bits, bytes seem to be missing

∂05-Apr-81  2217	Carl Hewitt <CARL at MIT-AI> 	Lisp Timing Mailing List 
Date: 6 April 1981 01:08-EST
From: Carl Hewitt <CARL at MIT-AI>
Subject:  Lisp Timing Mailing List
To: RPG at SU-AI
cc: HEWITT at MIT-AI, " @LSPTIM.DIS[P,DOC]" at SU-AI

Dick,

Please change my name on the mailing list to CARL-JUNK
so that receiving mail doesn't interrupt me on line.

Thanks,

Carl

∂06-Apr-81  1302	RPG  	Timing benchmark   
To:   "@LSPTRN.DIS[P,DOC]" at SU-AI   
The following is the first of the timing benchmarks to be tried.  As such
it is fairly simple. It is simply a combinatorial pairing function that
takes 2 sets (represented as lists), a matching function, and some other
constraints, and produces a set of possible pairings of items from each
set. The example below produces 2592 possible pairings. I've included the
entire testing program, which is at the bottom, along with the test data,
which are stored in global variables. Below is reproduced the timings from
the SAIL KL running with a load average of .75. The output of the test
program is the number of pairings, the runtime (EBOX time on the KL), and
the gctime (EBOX time).  The first run involves COREing up in response to
list space exhaustion, which results in the large gctime for the first
run: the other runs are in the resulting core images. I suggest you also
run it a few times to get a stable set of readings.

It would be nice to get some results from you for the LISP meeting at SRI
on wednesday, but that may not be possible.
			-rpg-
2592 
(RUNTIME 1.948) 	;in seconds
(GCTIME 29.711) 

2592 
(RUNTIME 1.887) 
(GCTIME 2.599) 

2592 
(RUNTIME 1.886) 
(GCTIME 2.565) 

2592 
(RUNTIME 1.892) 
(GCTIME 1.922) 

2592 
(RUNTIME 1.895) 
(GCTIME 1.973) 

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(DEFUN PAIRS (X Y MUST-APPEAR FUN APPLY-CONSTRAINTS CONSTRAINTS
	      NIL-PAIRS) 
       ((LAMBDA (XXX) 
	 (MAPCAN 
	  (FUNCTION(LAMBDA (I) 
	      (PROGN
	       (COND
		(MUST-APPEAR
		 (*CATCH
		  'OUT
		  (PROGN
		   (MAPC 
		    (FUNCTION(LAMBDA (I) (COND ((MEMBER (CDR I) MUST-APPEAR)
					 (*THROW 'OUT T)))))
		    I)
		   NIL)))
		(T)))
	      (LIST I)))
	  XXX)) 
	(MAPCAR (FUNCTION(LAMBDA (I) (CDR I)))
		(COND ((< (LENGTH X)
			  (+ (COND (NIL-PAIRS 1) (T 0)) (LENGTH Y)))
		       (PAIRS1 (MAKE-POSSIBILITY-1 X
						   Y
						   FUN
						   APPLY-CONSTRAINTS
						   CONSTRAINTS
						   NIL-PAIRS)))
		      (T (PAIRS2 (MAKE-POSSIBILITY-2 Y
						     X
						     FUN
						     APPLY-CONSTRAINTS
						     CONSTRAINTS
						     NIL-PAIRS)))))))


(DEFUN MAKE-POSSIBILITY-1 (X Y FUN APPLY-CONSTRAINTS CONSTRAINTS
			   NIL-PAIRS) 
       ((LAMBDA (N) 
	 ((LAMBDA (Q) 
	   (COND
	    (NIL-PAIRS (MAPC (FUNCTION(LAMBDA (I) (RPLACD I
						   (LIST* '(NIL)
							  (CDR I)))))
			     Q))
	    (Q)))
	  (MAPCAN 
	   (FUNCTION(LAMBDA (I) 
	      (PROGN
	       (SETQ N 0)
	       ((LAMBDA (A) (AND A
				 (OR (NULL CONSTRAINTS)
				     (NULL APPLY-CONSTRAINTS)
				     (FUNCALL APPLY-CONSTRAINTS
					      CONSTRAINTS))
				 (LIST (LIST* I A))))
		(MAPCAN 
		 (FUNCTION(LAMBDA (J) ((LAMBDA (Q) (COND (Q (NCONS Q))))
				(PROGN (SETQ N (1+ N))
				       (COND ((OR (NULL FUN)
						  (FUNCALL FUN I J))
					      (LIST* N J)))))))
		 Y)))))
	   X)))
	0))


(DEFUN MAKE-POSSIBILITY-2 (X Y FUN APPLY-CONSTRAINTS CONSTRAINTS
			   NIL-PAIRS) 
       ((LAMBDA (N) 
	 ((LAMBDA (Q) 
	   (COND
	    (NIL-PAIRS (MAPC (FUNCTION(LAMBDA (I) (RPLACD I
						   (LIST* '(NIL)
							  (CDR I)))))
			     Q))
	    (Q)))
	  (MAPCAN 
	   (FUNCTION(LAMBDA (I) 
	      (PROGN
	       (SETQ N 0)
	       ((LAMBDA (A) (AND A
				 (OR (NULL CONSTRAINTS)
				     (NULL APPLY-CONSTRAINTS)
				     (FUNCALL APPLY-CONSTRAINTS
					      CONSTRAINTS))
				 (LIST (LIST* I A))))
		(MAPCAN 
		 (FUNCTION(LAMBDA (J) ((LAMBDA (Q) (COND (Q (NCONS Q))))
				(PROGN (SETQ N (1+ N))
				       (COND ((OR (NULL FUN)
						  (FUNCALL FUN J I))
					      (LIST* N J)))))))
		 Y)))))
	   X)))
	0))


(DEFUN PAIRS1 (L) 
       (COND
	((NULL L) '((NIL)))
	(T
	 ((LAMBDA (CAND POSS) 
	   (MAPCAN 
	    (FUNCTION(LAMBDA (PAIRS) 
	       (PROGN
		((LAMBDA (AVOID ANS) 
		  (MAPCAN 
		   (FUNCTION(LAMBDA (I) 
			     ((LAMBDA (Q) (COND (Q (NCONS Q))))
			      (PROGN (COND ((CAR (MEMBER (CAR I)
							 AVOID))
					    (LIST* AVOID ANS))
					   (T (LIST* (LIST* (CAR I)
							    AVOID)
						     (LIST* CAND
							    (CDR I))
						     ANS)))))))
		   POSS))
		 (CAR PAIRS)
		 (CDR PAIRS)))))
	    (PAIRS1 (CDR L))))
	  (CAAR L)
	  (CDAR L)))))


(DEFUN PAIRS2 (L) 
       (COND
	((NULL L) '((NIL)))
	(T
	 ((LAMBDA (CAND POSS) 
	   (MAPCAN 
	    (FUNCTION(LAMBDA (PAIRS) 
	       (PROGN
		((LAMBDA (AVOID ANS) 
		  (MAPCAN 
		   (FUNCTION(LAMBDA (I) 
			     ((LAMBDA (Q) (COND (Q (NCONS Q))))
			      (PROGN (COND ((CAR (MEMBER (CAR I)
							 AVOID))
					    (LIST* AVOID ANS))
					   (T (LIST* (LIST* (CAR I)
							    AVOID)
						     (LIST* (CDR I)
							    CAND)
						     ANS)))))))
		   POSS))
		 (CAR PAIRS)
		 (CDR PAIRS))))) 
	    (PAIRS2 (CDR L))))
	  (CAAR L)
	  (CDAR L)))))

(declare (special a b))
(setq a '(
	  (1 2)
	  (7 8)
	  (9 0)
	  (a b c)
	  (a b c)
	  (d e f)
	  (d e f)
	  (g h i)
	  (g h i)
	  (j k l)
	  (m n o)
	  (p q r)
	  ))
(setq b '(
	  (a b c)
	  (j k l)
	  (d e f)
	  (p q r)
	  (g h i)
	  (9 0)
	  (a b c)
	  (p q r)
	  (7 8)
	  (j k l)
	  (2 1)
	  (3 2)
	  (8 7)
	  (9 8)
	  (0 9)
	  (m n o)
	  (d e f)
	  (j k l)
	  (m n o)
	  (d e f)
	  (p q r)
	  (g h i)
	  ))

(defun test ()
 ((lambda (t1 x gt)
	  (setq x (pairs a b () 'equal () () ()))
	  (setq t1 (- (runtime) t1))
	  (setq gt (- (status gctime) gt))
	  (print (length x))
	  (print (list 'runtime
		       (QUOTIENT (FLOAT  (- t1 gt))
				 1000000.)))
	  (print (list 'gctime
		       (quotient (float gt) 1000000.))))
  (runtime) ()(status gctime)))

∂06-Apr-81  2007	RPG  
To:   lisptranslators at SU-AI   
Here's the real timing stuff. As I pointed out, the old RUNTIME
(which is what RUNTIME in MacLisp here gives) is EBOX time (excluding
all memory time). Also, COMPILE the functions. The initial run
on SAIL does many CORE UUOs, which are some page creation stuff on ITS,
so it may run very slowly. The total runtime here at SAIL is given
by adding the GCTIME and WTIME entries below. 

This program is henceforth called: ``SAIL constraint combinatorial pairing
program'' or SCCPP.
			-rpg-
RUNTIME = EBOX time
WTIME = EBOX  + memory time (approx.)

2592 			;number of pairings
(RUNTIME 1.969) 	;EBOX time in seconds
(GCTIME 29.914) 	;GCTIME in seconds
(WTIME 25.8693333) 	;EBOX + MEMORY time in seconds

2592 
(RUNTIME 1.903) 
(GCTIME 2.616) 
(WTIME 5.334) 

2592 
(RUNTIME 2.008) 
(GCTIME 2.61) 
(WTIME 5.59000003) 

2592 
(RUNTIME 1.959) 
(GCTIME 2.065) 
(WTIME 5.86833334) 

2592 
(RUNTIME 1.904) 
(GCTIME 1.921) 
(WTIME 5.1623333) 

∂05-Apr-81  0208	H at MIT-AI (Jack Holloway) 	lisp timings    
Date:  5 APR 1981 0508-EST
From: H at MIT-AI (Jack Holloway)
Subject: lisp timings
To: rpg at SU-AI

Out of curiosity, you might try to get Craig Reynolds at III
to run some timings of Maclisp on the Foonly F-1.  The machine
is roughly 2 to 2.5 times a KL-10 for some things.
I'm not sure he has a net address, but you could get
in contact with him thru Dave Dyer at ISI.

∂06-Apr-81  1410	HEDRICK at RUTGERS 	Re: Timing benchmark     
Date:  6 Apr 1981 1701-EST
From: HEDRICK at RUTGERS
Subject: Re: Timing benchmark   
To: RPG at SU-AI
In-Reply-To: Your message of 6-Apr-81 1602-EST

Any chance I could get you to stick to functions defined in McCarthy?
I can guess what most of them are, but it would be better not to have
to guess.  If you would like to send me a copy of the Maclisp manual
(if there is a Maclisp manual), that would be OK, too.

Also, when you say E-box time, what do you mean?  If you mean E-box
ticks, converting those to time is non-straightforward.  What conversion
do you use?  If that is what you want, it will favor Elisp, since
using E-box ticks will eliminate the overhead due to pager refills,
which is what slows us down compared to non-extended Lisp.

I will try to figure out your functions and run them tonight.  However
I do not keep separate GC timing yet, so you probably won't get that.
-------

1. I think it is specifically a bad idea to stick to McCarthy functions
because we are doing ``real programs''. For each non-standard function
we can provide a definite semantics; if your (generic) LISP cannot deal with
it, then that is good specific knowledge.

2. It is exactly the EBOX ticks, ignoring memory time. But we don't
page (or swap) so this isn't too outrageous. Currently there is
only one very informal way to measure memory time, which I will
include later on in the real timing results: what I sent was
simply an indication for you to judge whether your LISP is
running the problem correctly.

3. We will need to devise a separate GC time test. I provided it because it
was a superset of the information I'd like to get from all of you.
			-rpg-
∂06-Apr-81  1931	Bernard S. Greenberg       <Greenberg at MIT-Multics> 	Re: Timing benchmark
Date:     6 April 1981 2147-est
From:     Bernard S. Greenberg       <Greenberg at MIT-Multics>
Subject:  Re: Timing benchmark
To:       RPG at SU-AI
Cc:       "@LSPTRN.DIS[P,DOC]" at SU-AI

I ran this program three times on MIT-Multics. Here's what it said.

Runtime  (secs)   86.4  86.4  86.8
gctime   (secs)    2.4   3.8   2.5    (41 users out of max
                                       set at 120)

Before we accept that this implementation is really 60 times
slower than MACLISP on SAIL, I would like to point out that
the unmodified program, sent in the mail, also ran on AI-ITS
MACLISP, also compiled, for FIVE SOLID WALL-CLOCK MINUTES (as
it did on Multics) without opening its mouth, but I quit it
before it finished on AI (several times). The KA-10 is reported
to be 2 or 3 times slower (instruction rate) than Multics.
The KL-10 at sail, given all benefit of the doubt, is NOT
TWO ORDERS OF MAGNITUDE faster than the AI KA-10.

Has anyone else encountered difficulty in making this program run
in a small number of seconds of CPU time?  Perhaps there is some
subtle conversion problem I missed in my perusal of this program?
(I made no modifications other than converting the characters to
lower-case, and on Multics, macroing *catch/*throw to catch/throw).

∂06-Apr-81  2008	HEDRICK at RUTGERS 	Re: Timing benchmark     
Date:  6 Apr 1981 2307-EST
From: HEDRICK at RUTGERS
Subject: Re: Timing benchmark   
To: RPG at SU-AI
In-Reply-To: Your message of 6-Apr-81 1602-EST

I translated your benchmark to R/UCI Lisp.  The only non-obvious
translation was due to the fact that in R/UCI Lisp (as in all Lisp 1.6
derivatives) there can only be 5 arguments to compiled functions.
Fortunately the arguments beyond the 4th were always NIL in your case,
so I just eliminated them.  (R/UCI Lisp has a special kind of function,
an LEXPR, that allows more than 5 arguments.  However its use did not
seem necessary in this case.)

Totals, including GC:

R/UCI Lisp (with free space expanded by 50K):
  interpreted:  15.0
  compiled:      4.6
  ", NOUUO:      3.2  (of which .6 is GC)

By NOUUO I mean that linkage between compiled functions is by direct
PUSHJ, not going through the interpreter.  This makes tracing and
breaking impossible.  (In Elisp we will use a protocol that does not
have this problem.)  Note that total runtimes are slightly better than
RPG's MACLisp timings.  However more of it is runtime and less is GC.
I conjecture that this will be typical of Lisp programs whose only
arithmetic involves small integers.  MACLisp will produce better
code for small integers, but will have to box and unbox them when returning
from functions or putting them into data structures, causing faster
runtime but more GC time. The first call is no different than others
because in R/UCI Lisp there is no automated expansion.  We had to
explicitly expand free storage by 50K before running.

Elisp (extended addressing Lisp) does not yet have a compiler.
Therefore some of the system functions (e.g. MEMBER, CADR) are running
interpreted.  This slows things down noticeably.  To get an idea of how
Elisp is going to work out, I compared it with a copy of R/UCI Lisp in
which the same functions are being interpreted.  [The temptation is
great to simply code these things in assembly language, since there are
really only a few at this point.  However I will attempt to resist this
temptation and continue to compare Elisp with this semi-interpreted
R/UCI Lisp.]  Elisp uses dynamic expansion and contraction of memory.
However there is no apparent difference between the first time and
other times (possibly because memory has contracted to its initial
state by the end).

Elisp:
  interpreted:  28.5  (E-box time, as used by RPG, 20.1 sec.)

R/UCI Lisp (with the same functions interpreted):
  interpreted:  32.6

So Elisp is (as usual) a small amount faster than R/UCI Lisp.  This
suggests that a good prediction for the final version of Elisp is 14
sec. interpreted.  I am not ready to make predictions for Elisp compiled
code, as we don't know how the pager refill problem is going to affect
it.  My guess is that it will be slightly slower than R/UCI Lisp, with
slowdown due to pager refills (from extended addressing) somewhat offset
by the better code generated by Utah's compiler.

Note that I normally report "normal" CPU time, not E-box times as used
by RPG. The E-box times will be noticeably smaller in the case of Elisp.
I regard the use of E-box time with Elisp as problematical, since it
is significantly smaller than conventional CPU time, even with 0 load
on the system.  I think this shows that E-box time omits some sort of
overhead that Elisp generates, probably pager refills (though the
documentation says that refills are not counted).  Until someone convinces
me otherwise, I will regard conventional CPU time as a better index of
Elisp's load on the system.  I report CPU times with fairly light (1 to 2)
load averages.

-------

∂07-Apr-81  0924	RPG  	Rules    
To:   lisptiming at SU-AI   
I have sent out the first benchmark, and already there are a number
of issues that need to be faced. First is that some systems (SAIL,
for instance) only reliably report EBOX time (excluding memory
time). Fortunately SAIL does make available some form of EBOX + MBOX
time, though it is unreproducible. When possible I want to see the most
information you can give me with as many distinctions as possible. If your
system can give a breakdown of memory references, cache write-through time,
page fault time, EBOX time,... please give me that. When I send out the
benchmarks I include the SAIL times so that you can get some idea of how
long it all takes. From now on I will provide EBOX time, EBOX + MBOX time,
and GCTIME. Because I only provide that does not mean that is all I want to
see.

Slightly more important is the issue of `cheating'. If the sources of 
benchmarks wish to allow specializations of their programs to the test data,
they should make remarks to that effect. If someone cares to make such 
specializations they must be cleared by the author and me. This isn't because
I like to be in control so much as I want to understand what is being gained
by the specialization and what features of the target LISP make such 
specializations necessary and/or desirable.

For example, in the first benchmark several of the functions are implicit 
LEXPRs, which in MacLisp means that there are more than 5 arguments. This
means that the arguments are passed on the stack rather than through registers.
Since this takes longer than the register convention (in this case) I want that
feature timed. In the test data I sent out, some of the arguments are provably
constant (). Chuck Hedrick at Rutgers (cleverly) noticed this and specialized
the functions. I want to specifically disallow that specialization (since the
LISP he had allows LEXPRs). [So do it again, Chuck.]

			-rpg-

∂07-Apr-81  1323	JONL at MIT-MC (Jon L White) 	Proposed ''mini'' benchmark, with interpretation. 
Date:  7 APR 1981 1611-EST
From: JONL at MIT-MC (Jon L White)
Subject: Proposed "mini" benchmark, with interpretation.
To: lisptiming at SU-AI

It hardly seems appropriate to run timing tests without including a word
or two about the discussion last fall which generated (in Masinter's words) 
so much more "heat" rather than "light", namely the TAK function sent my way 
in September 1980 by Mr. Shigeki Goto of the Electrical Communication 
Laboratories, Nippon Telegraph and Telephone Co., in Tokyo.

  (DEFUN TAK (X Y Z)
	(COND ((GREATERP X Y)
	       (TAK (TAK (SUB1 X) Y Z) 
		    (TAK (SUB1 Y) Z X)
		    (TAK (SUB1 Z) X Y) ))
	      (T Y) ))
  The test case, named TARAI-4, is to measure the timing of (TAK 4 2 0)
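For translators checking a port, here is a minimal modern sketch of the same recursion (Python, not part of the original message) that can be used to confirm what a correct translation should return for the TARAI-4 case:

```python
def tak(x, y, z):
    # Transcription of Goto's TAK: recurse while x > y, otherwise return y.
    if x > y:
        return tak(tak(x - 1, y, z),
                   tak(y - 1, z, x),
                   tak(z - 1, x, y))
    return y

print(tak(4, 2, 0))  # the TARAI-4 test case; evaluates to 4
```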

The value of this trivial function can be seen, not in a competition
between lisps for "speed", nor in a  condemnation of one dialect for 
the "kludges" which must be performed in order to get such a trivial
thing to run reasonably fast, but rather in the analysis of the basic
issues which trying to time it brought out.  After receiving many responses 
from around the community,  I mailed out a note in which was discussed
what I thought were some fundamental issues, and I'd like to send that
note to this group for consideration.  The original note from Mr. Goto and 
a lengthy series of communications about the timings for his test case, 
especially from people in the Interlisp community, is in the file 
  JONL;GOTO NOTE
on the MIT-MC machine.

  Date: 22 October 1980 11:08-EDT
  From: Jon L White <JONL at MIT-MC>
  Subject: Response to Goto's lisp timings
  To: .  .  . 
      As Larry Masinter mentioned in his comment on the Goto
  timings, comparisons between LISPs are likely to generate 
  more heat than light;  but the replies did throw a little
  more light on some things, especially the runtime variabilities
  of an Interlisp program, and I thought I'd summarize them 
  and pass them along to the original recipients of the note.
  However, I'd like to say that the general concern with speed 
  which I've encountered in the past has been between MacLISP and 
  FORTRAN, rather than with some other lisp;  and several Japanese 
  research labs have done AI research still in FORTRAN.  
      Just in case you're put off by looking at even more meaningless
  statistics, I'd also like to apprise you that following the little 
  summary is a brief technical discussion of three relevant points 
  disclosed by the TAK function (out of the many possible points at 
  which to look).  These technical points may be new to some of you, 
  and even beyond the LISP question you may find them useful; the key 
  words are (1) UUOLINKS, (2) Uniform FIXNUM representation, and 
  (3) Automatic induction of helpful numeric declarations by a compiler.

      Almost everyone familiar with Interlisp recognized that
  the ECL people had not requested "block" compilation in the TARAI-4
  example, and several persons supplied results from various
  20/60's around:
				   default compilation 	rewritten code, with
	 correspondent 		       timings 		block-compiled timing
     Date: 19 OCT 1980 2127-PDT		  9.8ms		    1.8ms
    From: MASINTER at PARC-MAXC2
     Date: 20 Oct 1980 1039-PDT		  16.ms		    2.ms
    From: CSD.DEA at SU-SCORE (Doug Appelt)
     Date: 20 Oct 1980 at 2257-CDT	   0.83ms    (for UCILISP only)
    From: tyson at UTEXAS-11 
    <Goto's original timings on UCILISP>     2.9ms
    <Goto's original timings on Interlisp>  15.0ms
    <myself, for MacLISP on 20/50)	   0.556ms

  There seems to be some unexplained discrepancy between Tyson's timing 
  and that of Goto, as well as between Masinter's and Appelt's default-
  compilation timings; but the "best-possible" Interlisp timings for
  a re-written function (replacing GREATERP by IGREATERP) and using
  the "right" block declarations seem consistent at around 2ms.  Indeed,
  as Shostack suggests in his note of "20 Oct 1980 1036-PDT" there is
  quite a bit of variability in timing Interlisp functions depending on
  just the right set of declarations etc (even for such a simple function).
      A point which, however, seems to be missed is that the notion of
  "block" compilation requires a decision at compile-time as to what 
  kind of function-linkage would be desirable (I presume that spaghetti-
  stack maintenance is the worst offender in runtime slowdown here);
  by comparison, the decision between fast and slow function linkage
  in MacLISP is made dynamically at runtime, so that only one kind of
  compilation is needed.  Indeed, by not burdening the novice with the
  understanding of yet one more inscrutable compilation question
  ("block" versus what?), the novice needn't be unduly penalized for
  not becoming a "hacker"; the above timings show a penalty of a factor 
  between 5 and 10 for ignoring, or under-utilizing, the "block" question.  


  (1) UUOLINKS:
      The following strategy, which we call the UUOLINKS hack, may have 
  first been introduced into the old LISP 1.6:  
      Arguments are set up for passing to a function and an instruction 
	  in a specially-managed hash table is XCT'd.
      In case a fast link is acceptable, the first usage of this linking
	  will convert the hash entry to a PUSHJ P,...  --  if not
	  acceptable, it will remain a slow interpretive route.
      Two copies of the hash-table are kept -- one is  never altered by
	  the linker, so that at any given point in time, all the "fast"
	  links may be restored to the unconverted slow interpretive route
	  (which may yet again be "snapped" to fast).
      Typically, a hash table size of 512. is satisfactory, but some
	  applications require 1024. or more (in particular, MACSYMA).
  Indeed as Boyer (@SRI-KL) mentioned in his note of "21 Oct 1980 2055-PDT", 
  the fast route -- whether by Interlisp's block compiler, or by MacLISP's
  runtime "snapper" -- does not leave much debugging help lying around
  on the stack; at least with this UUOLINKS approach, one can make the
  decision while in the middle of a debugging session, without penalty.
  The time cost of using the slow linkage seems to be a factor of between
  2 and 5.
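  The UUOLINKS mechanism described above can be caricatured in a modern
  language. The sketch below (Python; the class and method names are
  inventions for illustration, not anything from LISP 1.6 or MacLISP)
  keeps a pristine copy of every slow link so that snapped links can be
  restored wholesale, just as the note describes:

```python
# Hedged modern sketch of the UUOLINKS idea: call sites start on a slow
# route; the first call may "snap" the link to a direct one, and a
# pristine copy of the table lets all links be unsnapped for debugging.

class LinkTable:
    def __init__(self):
        self.links = {}     # call-site id -> current route
        self.pristine = {}  # untouched slow routes, kept for unsnapping

    def intern(self, site, fn, snap_ok=lambda: True):
        def slow(*args):
            # Slow interpretive route; may snap itself to a fast link
            # (the analogue of converting the XCT'd entry to a PUSHJ).
            if snap_ok():
                self.links[site] = fn
            return fn(*args)
        self.links[site] = slow
        self.pristine[site] = slow

    def call(self, site, *args):
        return self.links[site](*args)

    def unsnap_all(self):
        # Restore every link to the slow route (debugging mode).
        self.links = dict(self.pristine)
```

  The first call through a site runs the slow route and snaps the link;
  unsnap_all corresponds to restoring the unconverted copy of the hash
  table so one can debug without recompiling.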

  (2) Uniform FIXNUM representation
      Many years ago we changed MacLISP's representation of FIXNUM so
  that it would be uniform;  unlike the other PDP10 lisps with which I
  am familiar, we do not code some fixnums (small ones) as "immediate" 
  pointers and others (larger ones) as addresses.  Also, there is a
  read-only page or two which "caches" fixnum values of about -300. to
  +600., so that number consing of small numbers won't actually be 
  allocating new cells; e.g. interpreting a form like 
	  (DO I 0 (ADD1 I) (GREATERP I 100.) ...)
  Although I took a lot of flak for flushing the INUM scheme in favor
  of the uniform scheme, consider the advantage for compilation strategy,
  as seen in these representative code sequences for (IGREATERP X Y):
    INUM scheme:		MOVE A,-3(P)
			  JSP T,UNBOX
			  SAVE TT,somewhere
			  MOVE A,-4(P)
			  JSP T,UNBOX
			  CAME TT,somewhere
			  ...
    Uniform scheme:	MOVE TT,@-3(P)
			  CAME TT,@-4(P)
			  ...

  (3) Automatic induction of helpful numeric declarations by a compiler.
      As Masinter and Boyer pointed out, most Interlisp programmers 
  would probably be using "IGREATERP" rather than "GREATERP" (the MacLISP 
  correspondent is "<" ).  But a compiler can accumulate static information
  and do this change automatically;  at the very least, it could give
  out warning checks such as "Floating-point argument used with FIXNUM-only
  operation".  Providing the capability for compile-time typing of variables
  is probably the only way to meet the FORTRAN challenge -- which must be
  met since much useful AI research needs both symbolic and numeric
  capabilities.  Larry's MASTERSCOPE is a very similar sort of automated
  induction scheme.
 

∂10-Apr-81  1051	HEDRICK at RUTGERS 	Re: Rules      
Date: 10 Apr 1981 1301-EST
From: HEDRICK at RUTGERS
Subject: Re: Rules    
To: RPG at SU-AI
cc: lisptiming at SU-AI
In-Reply-To: Your message of 7-Apr-81 1224-EST

OK, but you will notice that I was also the only person who got the
thing done by the deadline you specified.  If we are going to end up
doing major conversions for each test, and furthermore if conversions
are going to have to be approved by you, you may find fewer volunteers
than originally planned. The reason for eliminating the extra args was
of course that turning the things into LEXPR's would be a pain in the
neck.  This is because in UCI Lisp LEXPR's do not refer to arguments by
name, but as (ARG x).  I can obviously write a program to do this, but
that was not feasible in the amount of time I had before the meeting.
Furthermore, it looked to me like the functions that had more than 5
arguments were used for preparing the data, but that only PAIRS1
and PAIRS2 were actually called large numbers of times.  Now that the
issue has been brought up, I will of course test LEXPR's, but would
be surprised if there is any change in performance.

Even if LEXPR's change the performance, I am not sure that is the right
way to do the conversion to UCI Lisp. It would be very unusual for a UCI
Lisp programmer to use an LEXPR in order to handle more than 5
arguments. They are normally used for indefinite arguments, when it
makes sense to number them.  If named variables are replaced with (ARG 1),
(ARG 2), ..., this obviously makes the program somewhat opaque.  Another
possible method is to lambda-bind variables to the required values and
refer to them globally.  While this is less than ideal, I claim that it
is better than (ARG n).  Making a list of the extra arguments is also
possible, but (CADR ARGS) is little better than (ARG 2).

At this point we start having to ask what the purpose of this project
is.  If it is to see how the dialects vary, then I claim that nothing is
accomplished by forcing us to convert into code that we would not in
fact write that way in the dialect. It seems to me that it is perfectly
legitimate for me to say that UCI Lisp simply does not support EXPR's
with more than 5 arguments, and that I would find some other way to do
my task.
-------

Groundrules (reprise)
I believe that at the outset of this project I stated what my goals
were, and the order of importance. But I will reiterate them anyway:

1. I want to provide for each major type operation (variable lookup,
function call,...) a relative, hopefully total, order on the various LISPs
and LISP systems around. I also hope that there is enough standardized
timing facilities around to at least get some idea of the approximate
absolute speeds as well.

2. I want to determine, myself, what the qualitative differences between
the various LISP systems around are. I hope to do this by watching how various
programs and constructs are handled by the translators. At first I hoped
that the ``translator volunteers'' would be just that, but now it seems I
will need to do most of the translations myself, which is ok as long as
I can merely provide the framework and not exact working code. If you
want a NIL/MacLisp person to propose the exact program whose timing is
universally reported as the performance of your favorite LISP system, then
I might be more willing to do everything myself.

3. Having ``rules'' is absolutely fair. First, one certainly cannot look
at specializations of a program to the data. Moreover, innate laziness (no
`major conversions' to quote Hedrick) dictates that the programs should be
examined as little as possible. Arguing style is totally irrelevant.
Suppose I wanted to test function call time and proposed factorial in its
recursive form, it is totally opposed to the spirit of what that tests to
translate it into an iterative program no matter what absolute style
considerations you bring to bear. One of the things I wanted to test 
with this program is how functions with more than 5 arguments behave
and are handled with different systems. This is totally fair.

As pointed out, the standard > 5 argument LEXPR conversion is

	(defun foo (a1 ... an) ...) =>
	(defun foo n ((lambda (a1 ... an) ...) (arg 1) ... (arg n)))
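As a rough modern analogy (Python, purely illustrative and not part of the
groundrules), the conversion packs the incoming arguments into a vector and
immediately rebinds the original names, exactly as the (lambda (a1 ... an) ...)
applied to the (arg i) accessors does:

```python
# Hedged analogy (not Lisp) for the LEXPR conversion sketched above:
# a function of n named arguments becomes one that receives the whole
# argument vector and rebinds the names inside.

def foo_fixed(a1, a2, a3, a4, a5, a6, a7):
    return [a1, a2, a3, a4, a5, a6, a7]

def foo_lexpr(*args):
    # 'args' plays the role of the LEXPR's (arg i) accessors.
    def body(a1, a2, a3, a4, a5, a6, a7):
        return [a1, a2, a3, a4, a5, a6, a7]
    return body(*args)
```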

I was flip with Chuck because we spent 2 delightful years at Illinois
together a few years back.

4. Perhaps I should have sent out more help with this program, and
in the future I will, but another point of this benchmark was to test
the testing system. 

From now on I will provide with each benchmark a description of each
`primitive' in the program(s) that is not in McCarthy along with translation
tips for those facilities that are not universal. As time goes on and I
become truly familiar with all the systems, I will provide specific help
for each LISP.
			-rpg-
∂10-Apr-81  1205	George J. Carrette <GJC at MIT-MC> 	Rules    
Date: 10 April 1981 14:19-EST
From: George J. Carrette <GJC at MIT-MC>
Subject:  Rules
To: HEDRICK at RUTGERS
cc: lisptiming at SU-AI, RPG at SU-AI


I too was surprised at the chastising about breaking rules that
went on here, considering the relevance of that classic law of programming:
"if a function has more than three arguments then they are
 probably in the wrong order."

In point of fact, all the common so-called "lexprs," PROGN,
TIMES, PLUS, LIST, are specially handled by the compilers
of the lisp systems I'm familiar with.
Furthermore, the maclisp and lispm programs that have
used user-defined multi-argument constructions have invariably
developed in the direction of passing arguments by "keywords"
where these keywords are pre-processed in some form or another
at compile-time.

Ah, there are some interesting possible ways of optimizing
keyword argument calling in VAX NIL. Maybe we can talk about
things like this at some time.

-gjc

∂11-Apr-81  1001	CSVAX.jkf at Berkeley 	result of pairs benchmark on franz.  
Date: 11 Apr 1981 09:56:26-PST
From: CSVAX.jkf at Berkeley
To: rpg@su-ai
Subject: result of pairs benchmark on franz.

							pair benchmark results
							submitted by j foderaro
							(csvax.jkf@berkeley)
							10 april 81

Here are the results of running the pairs benchmark on Franz Lisp on 
a VAX 11/780 running Berkeley 4BSD Unix.  The load average was less
than one when the timing was done.  Timing on Unix is done in 60ths of
a second and the time charged to a process includes some of the system
overhead for memory management.  I ran the benchmarks on an unloaded
system to reduce the interference of the memory manager.

The program was run with modification only to the test function
shown below.  Perhaps you should in the future use macros like (cpu-time) 
and (gc-time) and each site can define these macros.

(defun test ()
 ((lambda (t1 x gt)
	  (setq x (pairs a b () 'equal () () ()))
	  (setq t1 (- #-Franz (runtime) 
		      #+Franz (car (ptime)) 
		      t1))
	  (setq gt (- #-Franz (status gctime)
		      #+Franz (cadr (ptime)) 
		      gt))
	  (print (length x))
	  (print (list 'runtime
		       (QUOTIENT (FLOAT  (- t1 gt))
				 #-Franz 1000000.
				 #+Franz 60.)))
	  (print (list 'gctime
		       (quotient (float gt) 
				 #-Franz 1000000.
				 #+Franz 60.)))
	  #+Franz (terpri))
  #-Franz (runtime) #+Franz (car (ptime)) 
  ()
  #-Franz (status gctime)
  #+Franz (cadr (ptime))))


---
The size of the compiled file is 2768 bytes.

Here are the results:

Script started on Fri Apr 10 22:16:58 1981
Reval: lisp
Franz Lisp, Opus 34
-> (load 'pairs)
[fasl pairs.o]
t
-> (test)
2592(runtime 7.033333333333333)(gctime 23.53333333333333)
nil
-> (test)
2592(runtime 7.383333333333333)(gctime 4.816666666666667)
nil
-> (test)
2592(runtime 7.283333333333333)(gctime 4.366666666666667)
nil
-> (test)
2592(runtime 7.333333333333333)(gctime 4.666666666666667)
nil
-> (exit)

------
I looked at which functions were being called and it seems just about all this
benchmark does is call 'member' 10,000 times.  I noticed that in our system
'memq' would do as good a job as 'member' so I replaced member by memq
and ran it with these results:

Reval: lisp
Franz Lisp, Opus 34
-> (load 'memqpairs)
[fasl memqpairs.o]
t
-> (test)
2592(runtime 1.683333333333333)(gctime 23.55)
nil
-> (test)
2592(runtime 1.733333333333333)(gctime 4.833333333333333)
nil
-> (test)
2592(runtime 1.766666666666667)(gctime 4.35)
nil
-> (test)
2592(runtime 1.783333333333333)(gctime 4.7)
nil
-> (exit)
script done on Fri Apr 10 22:21:50 1981
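The substitution works because MEMBER searches with EQUAL (structural
equality) while MEMQ searches with EQ (pointer identity). The following is
a hedged modern sketch (Python, not Franz Lisp code) of the distinction,
for readers deciding whether the same substitution is safe in their system:

```python
# MEMBER vs MEMQ: both return the tail of the list beginning at the
# first match (or an empty list), but they differ in the equality test.

def member(x, lst):
    # EQUAL-style search: structural comparison.
    while lst:
        if lst[0] == x:
            return lst
        lst = lst[1:]
    return []

def memq(x, lst):
    # EQ-style search: identity comparison only, hence cheaper, but it
    # succeeds only when the item is the very same object in the list.
    while lst:
        if lst[0] is x:
            return lst
        lst = lst[1:]
    return []
```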

∂13-Apr-81  1320	RPG  
To:   lisptranslators at SU-AI   
First, in SCCPP there are functions with 7 arguments. For example,
the first function starts out:

(DEFUN PAIRS 
       (X Y MUST-APPEAR FUN APPLY-CONSTRAINTS CONSTRAINTS
	  NIL-PAIRS) ...)

I suggest the following translation:

(DEFUN PAIRS n
       ((LAMBDA (X Y MUST-APPEAR FUN APPLY-CONSTRAINTS CONSTRAINTS
		  NIL-PAIRS) ...)
	(ARG 1)(ARG 2)(ARG 3)(ARG 4)(ARG 5)(ARG 6)(ARG 7)))

(*list a1 ... an) => (cons a1 (cons a2 ...(cons an-1 an)))

(*catch x y) evaluates the form y. x should EVAL to a tag. If y returns
normally, the value of the *catch is the value of y. If the evaluation
of y entails the evaluation of a form like (*throw q v) where q EVALs
to the same tag that x did, then v is evaluated and the value of the *catch
is the value of v. Unless there is an intervening *catch with the same
tag...

MAPCAN is MAPCAR with NCONC instead of CONS.

1+, +, < etc are FIXNUM versions of ADD1, PLUS, LESSP etc.

(FUNCALL fun x1 ... xn) evaluates all of its arguments and
applies the value of fun to the arguments x1 ... xn. So
(FOO a b c d) = (FUNCALL 'FOO a b c d)

			-rpg-
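Two of the glossed primitives lend themselves to tiny executable checks.
The sketches below (Python, illustrative only; note that the Lisp *list
can also build a dotted pair, which this list-tail-only sketch does not
model) mirror the definitions above:

```python
# Hedged modern sketches of two primitives from the glossary above.

def star_list(*args):
    # (*list a1 ... an) == (cons a1 (cons a2 ... (cons an-1 an))):
    # like list, except the last argument supplies the final tail.
    *front, tail = args
    result = list(tail)
    for x in reversed(front):
        result.insert(0, x)
    return result

def mapcan(fn, lst):
    # MAPCAN is MAPCAR with NCONC (destructive append) instead of CONS:
    # the per-element result lists are spliced into one flat list.
    out = []
    for x in lst:
        out.extend(fn(x))
    return out
```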

∂13-Apr-81  1239	RPG  	Groundrules (reprise)   
To:   lisptiming at SU-AI   
I believe that at the outset of this project I stated what my goals
were, and the order of importance. But I will reiterate them anyway:

1. I want to provide for each major type operation (variable lookup,
function call,...) a relative, hopefully total, order on the various LISPs
and LISP systems around. I also hope that there is enough standardized
timing facilities around to at least get some idea of the approximate
absolute speeds as well.

2. I want to determine, myself, what the qualitative differences between
the various LISP systems around are. I hope to do this by watching how various
programs and constructs are handled by the translators. At first I hoped
that the ``translator volunteers'' would be just that, but now it seems I
will need to do most of the translations myself, which is ok as long as
I can merely provide the framework and not exact working code. If you
want a NIL/MacLisp person to propose the exact program whose timing is
universally reported as the performance of your favorite LISP system, then
I might be more willing to do everything myself.

3. Having ``rules'' is absolutely fair. First, one certainly cannot look
at specializations of a program to the data. Moreover, innate laziness (no
`major conversions' to quote Hedrick) dictates that the programs should be
examined as little as possible. Arguing style is totally irrelevant.
Suppose I wanted to test function call time and proposed factorial in its
recursive form, it is totally opposed to the spirit of what that tests to
translate it into an iterative program no matter what absolute style
considerations you bring to bear. One of the things I wanted to test 
with this program is how functions with more than 5 arguments behave
and are handled with different systems. This is totally fair.

As pointed out, the standard > 5 argument LEXPR conversion is

	(defun foo (a1 ... an) ...) =>
	(defun foo n ((lambda (a1 ... an) ...) (arg 1) ... (arg n)))
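Spelled out for a hypothetical 6-argument function (a MacLisp-style sketch; note that ARG is 1-indexed):

```lisp
;; Original definition, which some compilers reject outright:
;;   (defun foo (a b c d e f) (list a b c d e f))

;; LEXPR form: FOO's single "variable" N is bound to the number of
;; arguments actually supplied, and (ARG i) fetches the i-th one.
(defun foo n
  ((lambda (a b c d e f) (list a b c d e f))
   (arg 1) (arg 2) (arg 3) (arg 4) (arg 5) (arg 6)))
```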

I was flip with Chuck because we spent 2 delightful years at Illinois
together a few years back.

4. Perhaps I should have sent out more help with this program, and
in the future I will, but another point of this benchmark was to test
the testing system. 

From now on I will provide with each benchmark a description of each
`primitive' in the program(s) that is not in McCarthy along with translation
tips for those facilities that are not universal. As time goes on and I
become truly familiar with all the systems, I will provide specific help
for each LISP.
			-rpg-

∂13-Apr-81  1338	CLR at MIT-XX 	Re: Groundrules (reprise)     
Date: 13 Apr 1981 1634-EST
From: CLR at MIT-XX
Subject: Re: Groundrules (reprise)   
To: RPG at SU-AI
cc: pdl at MIT-XX
In-Reply-To: Your message of 13-Apr-81 1539-EST

Dear rpg,
	I would like to if possible participate in some of the LISP timing
tests using MDL on the 20.  Unfortunately, MDL is similar to LISP (especially
in spirit) but vastly different in detail.  In order to (for instance) run
your first proposed benchmark, a gross translation will be required.  As
starting points, MDL has no LAMBDA, PROGN, LIST*, MAPC, MAPCAN...  All of 
these things can be accomplished in MDL but in a vastly different way.

	My question is whether I should participate in the timings under
these circumstances or should MDL drop out due to extreme differences.

-Chris Reeve
-------

I would like to see what MDL has to say about these issues. I don't object
to having languages that are not recognizably LISP as long as they are
appropriate languages for AI research. ``Appropriate'', I think, means
at least being programming-environment based, as opposed to the sort
of batch-based environment that PASCAL-like languages encourage. In any
event, at the minimum, send me your conception of the first benchmark.
∂13-Apr-81  1724	YONKE at BBND 	Re: Groundrules (reprise)     
Date: 13 Apr 1981 2022-EST
Sender: YONKE at BBND
Subject: Re: Groundrules (reprise)   
From: YONKE at BBND
To: RPG at SU-AI
Message-ID: <[BBND]13-Apr-81 20:22:10.YONKE>
In-Reply-To: Your message of 13 Apr 1981 1239-PST

Dick, I agree with your summary (or reiteration) of your last
message.  (I can always write a program that makes my lisp look
good and under the right conditions make the others look like
shit, e.g. DREVERSE on Interlisp-Jericho is faster than a speeding
banana, but other things may be slow.)

Sorry we didn't get a chance to talk at the lisp meeting -- maybe
next time.  I'm going to be out of communication for the rest
of this month (sailing in the Virgin Islands).  So apologies
for the lack of messages.

By the way, are the timings "official" in the sense of being
sponsored by an agency or is this your "pet project"?

Martin

The project is currently a siphoning from other Stanford money,
but McCarthy wants to get some machine time and a grad student
to do some of the correlation.
				-rpg-
∂13-Apr-81  1934	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	Re: Groundrules (reprise)       
Date: 13 Apr 1981 2134-CST
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: Re: Groundrules (reprise)   
To: RPG at SU-AI
In-Reply-To: Your message of 13-Apr-81 1439-CST

Your "standard > 5 argument LEXPR conversion" is not correct in UCI-LISP
and, presumably, not in Hedrick's Lisp.  The compiler does not handle the
lambda with > 5 args whether it is a defined function or an internal
lambda.

Now I ask, where does your word "standard" come from?

Again, I'd like to point out that Hedrick's original program should be
ok.  He pointed out that he could not handle > 5 args and handled it in
a way that would not affect the timing in any significant way (I'd say
less than .1%).

By the way, I checked the LISP OARCHI file and found that the >5 args was
handled beginning in November 1972.


(While I was checking on that, I got your next message.  You still are thinking
that a compiler would have no trouble with >5 args to a Lambda even if it
can't handle >5 args to the Lambda of the function definition.  The answer, of
course, is to use nested LAMBDAs or a PROG and SETQs.  From the original
entry of 11/72 into LISP OARCHI it looks like MACLISP does the same thing when
it finds an internal LAMBDA with >5 args as it does with a function of >5 args.)
-------
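Tyson's suggested workarounds might look like this for six values (a sketch; the split point is arbitrary):

```lisp
;; Nested LAMBDAs: no single LAMBDA binds more than five variables.
((lambda (a b c)
   ((lambda (d e f)
      (list a b c d e f))
    4 5 6))
 1 2 3)

;; PROG and SETQs: no argument-carrying LAMBDA at all.
(prog (a b c d e f)
      (setq a 1) (setq b 2) (setq c 3)
      (setq d 4) (setq e 5) (setq f 6)
      (return (list a b c d e f)))
```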

Register/Stack Allocation of LAMBDA vars
It all depends on your compiler or at least on the level of technology
embraced thereby. The MacLisp compiler (and, I think, the InterLisp compiler)
both take internal LAMBDAs to mean a naming (or binding) of temporary
locations, normally on the stack but possibly in the registers. Modern
hardware (cache memories) blurs the speed advantage that registers
once held for access.

Also, register passing meant one didn't need to pass the number
of arguments as in the LEXPR case, where a stack passing protocol is
used, and where the adjustment of the stack is explicitly required.

Thus, there is a distinction between function interface LAMBDAs and
internal LAMBDAs in some compilers. Since there can be n active LAMBDA
variables at any one time, stack allocation (binding) is necessary
in any compiler. The stack allocation, though, can be
done as a register save operation rather than as a pre-planned allocation,
so I guess you could have a reasonable compiler which register allocates
though I wouldn't have thought it was still done that way.
So, translate the > 5 case as best you can and tell me how you decided on it.
			-rpg-
∂13-Apr-81  2214	HEDRICK at RUTGERS 	Re: Groundrules (reprise)     
Date: 14 Apr 1981 0111-EST
From: HEDRICK at RUTGERS
Subject: Re: Groundrules (reprise)   
To: RPG at SU-AI
In-Reply-To: Your message of 13-Apr-81 1539-EST

The problem with your proposed paraphrase of expr's with > 5 args is
that it uses the construct that we are trying to avoid, namely a
lambda with > 5 args.  I am sure I can come up with something...

It seems that for your project to work, we are going to have to send
back the translation.  Else how can you judge all of what you want to
judge?  What I will probably do is build up a MACLisp conversion package
(assuming that many of the tests are in MACLisp - if you contemplate
doing each one in a different dialect things could get tense).  Any
preferences as to the form in which I send the translation? At this
point I can just send you the MACLisp package.  But as time goes on that
will get longer and longer and have less and less specific relevance to
any one test.  I do not contemplate rewriting the function in UCI Lisp,
but mostly defining the functions and then using the test in close to
its original form.  Possibly you will say that as long as I can implement
the constructs in the original, you don't really care to see how I do
it.  Is that the case?



-------

Misunderstanding
I guess what I said to Tyson (enclosed) is relevant. I was under the impression
that internal LAMBDAs could compile onto the stack directly, rather than
through a register-save operation. So my comments about the LEXPR conversion
simply did not apply to your system. But, I've learned something important
about your LISP, which is what I wanted to do.

From now on I hope to specify what it is that is tested with each benchmark
and be able to trust that you're doing the right thing, though I would like
to see the resulting code for the benchmark and/or the translation program.

What I may end up doing is to send out both the original benchmark
and my Maclisp translation to you in order to make things easier.

About LIST* (which I mistakenly called *LIST through a weird spoonerism)
I think we all ought to get straight which LISPs do macros, and when they
are to be used. For example, I think that LIST* is turned into the
CONS form and then compiled, so I recommend a macro version of it.
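One way to write the macro version (a sketch in later defmacro style; older MacLisps would phrase the macro definition differently):

```lisp
;; Expand (LIST* a b c) into (CONS a (CONS b c)) at compile time,
;; so the compiler sees only CONSes.
(defun list*-expansion (args)
  (cond ((null (cdr args)) (car args))
        (t (list 'cons (car args) (list*-expansion (cdr args))))))

(defmacro list* (&rest args)
  (list*-expansion args))
```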

Attached message to Tyson:
Subject: Register/Stack Allocation of LAMBDA vars
It all depends on your compiler or at least on the level of technology
embraced thereby. The MacLisp compiler (and, I think, the InterLisp compiler)
both take internal LAMBDAs to mean a naming (or binding) of temporary
locations, normally on the stack but possibly in the registers. Modern
hardware (cache memories) blurs the speed advantage that registers
once held for access.

Also, register passing meant one didn't need to pass the number
of arguments as in the LEXPR case, where a stack passing protocol is
used, and where the adjustment of the stack is explicitly required.

Thus, there is a distinction between function interface LAMBDAs and
internal LAMBDAs in some compilers. Since there can be n active LAMBDA
variables at any one time, stack allocation (binding) is necessary
in any compiler. The stack allocation, though, can be
done as a register save operation rather than as a pre-planned allocation,
so I guess you could have a reasonable compiler which register allocates
though I wouldn't have thought it was still done that way.
So, translate the > 5 case as best you can and tell me how you decided on it.
			-rpg-
∂21-Apr-81  1316	RPG  	SCCPP    
To:   lisptranslators at SU-AI   
I have only heard from SAIL, Multics, Franz, and Rutgers on the
first timing benchmark so far. Please send your results. To see
what people have done so far you can look at: 
	SAIL:RESULTS.TIM[TIM,LSP]
No password to FTP away or to TYPE it out.
			-rpg-

∂13-Mar-81  1959	MEEHAN at MIT-AI (James R. Meehan) 
Date: 13 MAR 1981 2132-EST
From: MEEHAN at MIT-AI (James R. Meehan)
To: RPG at SU-AI

I was "volunteered" into the LISP Timing Mailing List, but I'd like
to get off it if it requires reading the volumes of mail I've seen in
the last few days. 
  I'm in charge of UCI LISP and UCI MLISP (University of California at
Irvine) and I have more than a passing interest in LISP-related projects,
but my account at MIT-AI is used perhaps once a week for checking mail
and messages, and [important] I do this at 300 baud. If you can send
some of the correspondence by US mail (as opposed to all the day-to-day
stuff), I'd be interested. 
  I should also put in a pitch for mentioning your project in the
SIGART Newsletter, which is read by about as large a LISP audience
as anything. 
 Cordially -  Jim Meehan

∂31-Mar-81  1615	Deutsch at PARC-MAXC 	Re: Timing Benchmarks  
Date: 31 Mar 1981 16:14 PST
From: Deutsch at PARC-MAXC
Subject: Re: Timing Benchmarks
In-reply-to: RPG's message of 31 Mar 1981 1451-PST
To: Dick Gabriel <RPG at SU-AI>

It is vitally important that timing tests NOT be limited to made-up code
fragments, but include systems which are in heavy use NOW.  We got burned
very badly on Interlisp-D because of completely erroneous ideas about what
things actually got used a lot.

Right, but I'm having trouble getting people to contribute benchmarks!
			-rpg-
∂21-Apr-81  1604	Greenberg.Symbolics at MIT-Multics 
Date:  21 April 1981 19:04 est
From:  Greenberg.Symbolics at MIT-Multics
To:  Dick Gabriel <RPG at SU-AI>
In-Reply-To:  Message of 21 April 1981 15:58 est from Dick Gabriel

Well, since I sent you these results, I have done some more experimentation.
The MIT-AI results can be written off as user load. Yes,
indeed, of course those files were compiled.   I am afraid that
I will have to stand by those numbers and conclude that consing
is DAMN SLOW.  The -10 can allocate a cons in 1 XCT; a routine of
some length is involved on Multics. Oh well. That's why I
work with Lisp Machines now.

∂07-Apr-81  1037	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	Re: Rules        
Date:  7 Apr 1981 1230-CST
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: Re: Rules    
To: RPG at SU-AI
cc: hedrick at RUTGERS

Not being totally familiar with MACLISP, I didn't realize that MACLISP
automatically made LEXPRs out of EXPRs with 6 or more arguments.  UCI-Lisp
(which is presumably close to what Hedrick is running) does have LEXPRs
but does not provide for automatic conversion.

What I did, and I think it is valid, was to make the last three args into
a list.  Any argument with that technique?  I don't separate the list
except where I need it.

It appears to me that the major portion of the time is spent in PAIRS1
rather than in the sections of code that involve the >5-argument functions.
The difference in argument passing for them should be swamped by the
recursion in PAIRS1.   (I just ran a test and 97% of the time is spent in
PAIRS1 (interpreted).)
-------

∂07-Apr-81  1107	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	Rules - GC time  
Date:  7 Apr 1981 1259-CST
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: Rules - GC time
To: rpg at SU-AI

The first times that I ran SCCPP, I arbitrarily chose to use approximately
30000 (decimal) free space.  The ratio of runtime versus GC time was
approximately the same as what you had supplied.  However I later ran it
with twice that space.  I was now getting exactly one GC per test (each
recovering approximately the same amount of space).  My total GC time
had dropped by a factor of 3 1/2.

UCI-Lisp uses the standard mark and sweep routine.  Obviously there is
an overhead associated with each GC.  Furthermore, a GC during which
most of the core is tied up is more costly than one in which free space
is almost empty.  Thus GC time required for a problem is a hard point
to pin down.

If I were to set my free space up so that I totally filled memory, I
would have less than one GC per run on the average.  If I used more free
space than physical core would allow (not possible on our 20 but a problem
on our 10), I would swap GC time for paging time.  This would seem to be
unfair as paging time could be considered either the job's cost or system
overhead (like swapping).  On our 10, increasing core size to beyond
the physical space dramatically increases run time because of paging costs.

It might be a reasonable idea to specify a maximum amount of free space
(or should it be total core used?) in which a program can be run.  This
may not be possible for the Lisp machines.  An alternative idea would be
to adjust core size so that you get exactly one GC per test.  Suppose that
you start off with a fresh free space and run the problem, GCing only when
it was done.  This would count the cost to collect the words used once
without counting what it would cost to track through the temporary data
during the running of the program (which is dependent on when the GC
happens).   I feel this would be a reasonable comparison to the costs
associated with having a parallel GC or a machine with such a large
address space so as to never have a GC.
-------
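Tyson's one-GC-per-run proposal could be harnessed roughly as follows (hypothetical; RUNTIME and (STATUS GCTIME) are MacLisp-style primitives and differ between systems):

```lisp
;; Run the benchmark in fresh free space, force a single explicit GC
;; at the end, and report run time and GC time separately.
(defun time-one-gc (thunk)
  (prog (t0 g0 t1 g1)
        (setq t0 (runtime) g0 (status gctime))
        (funcall thunk)                 ; the benchmark itself
        (gc)                            ; the one collection charged to it
        (setq t1 (runtime) g1 (status gctime))
        (return (list 'runtime (difference (difference t1 t0)
                                           (difference g1 g0))
                      'gctime  (difference g1 g0)))))
```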

∂07-Apr-81  2213	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	SCCPP on UCI-Lisp
Date:  8 Apr 1981 0001-CST
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: SCCPP on UCI-Lisp
To: rpg at SU-AI

		Results of LISP timing tests for UCI-Lisp

(Times are in the form R+G where R is the runtime (not including GC time)
and G is the GC time.  All times are in seconds.  "Interp" means interpreted,
"Slow" means compiled but using UUO-links, "Fast" means compiled with the UUO
links replaced by direct jumps.)

				   Processor
Program			KL-2060			KI-1060
	   Interp	Slow	  Fast	     Interp	  Slow		Fast

SCCPP:
 Free:
  30000	 14.00+2.78  5.32+2.38	3.38+2.78   53.43+8.58  21.14+11.62 12.77+11.56
	 14.04+2.83  5.37+2.70	3.35+2.71		20.40+11.47 12.63+11.36
				3.34+2.70		20.72+11.50 12.71+11.44

  60000	 14.60+0.58  5.40+0.50	3.35+0.53   52.80+1.42	21.18+1.45  12.81+1.39
	 14.20+0.64  5.44+0.52	3.36+0.53		21.27+1.38  12.34+1.43
	 14.09+0.63  5.37+0.52	3.35+0.52		21.19+1.40  12.93+1.40
	 14.22+0.61  5.35+0.52	3.40+0.53

  Notes: The functions with more than 5 arguments were changed to 5 argument
functions by making the last 3 into a list.  These were involved in less than
3% of the run time so the difference is insignificant.  The timings on the
2060 were with a load average less than 1.  The timings on the KI-10 were with
a moderate load.  The differences in the various timing runs are probably due
to system overhead due to swapping.  The 30K free space examples had about 5
or 6 GC's while the 60K free space examples had one GC each.
-------

∂21-Apr-81  2018	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	Lost mail?
Date: 21 Apr 1981 2217-CST
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: Lost mail?
To: rpg at SU-AI

Enclosed is a set of all my messages to you.  Apparently some of them (in
particular the results I sent you just before the Lisp meeting) got lost.
I checked TIMING.MSG[TIM,LSP] and only found two msgs from me in there.
I also saw that one of them apparently had been replied to - but I never got
a reply.  I don't know what is going on, but please acknowledge receipt of
this.
 6-Apr-81 16:05:21-CST,582;000000000001
Date:  6 Apr 1981 1605-CST
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: Re: Timing benchmark   
To: RPG at SU-AI
In-Reply-To: Your message of 6-Apr-81 1502-CST

I suggest that each program submitted for everyone to run get a unique
name so we can describe it simply.   Also, please name the language
in which it was written.

Also, it wasn't clear from your message whether the code got compiled or
ran interpreted for the times you got.  (Only from looking at the code,
I'd guess it ran interpreted since there was nothing that seemed to compile
it.)
-------
 7-Apr-81 12:30:27-CST,911;000000000001
Date:  7 Apr 1981 1230-CST
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: Re: Rules    
To: RPG at SU-AI
cc: hedrick at RUTGERS

Not being totally familiar with MACLISP, I didn't realize that MACLISP
automatically made LEXPRs out of EXPRs with 6 or more arguments.  UCI-Lisp
(which is presumably close to what Hedrick is running) does have LEXPRs
but does not provide for automatic conversion.

What I did, and I think it is valid, was to make the last three args into
a list.  Any argument with that technique?  I don't separate the list
except where I need it.

It appears to me that the major portion of the time is spent in PAIRS1
rather than in the sections of code that involve the >5-argument functions.
The difference in argument passing for them should be swamped by the
recursion in PAIRS1.   (I just ran a test and 97% of the time is spent in
PAIRS1 (interpreted).)
-------
 7-Apr-81 12:59:51-CST,2112;000000000001
Date:  7 Apr 1981 1259-CST
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: Rules - GC time
To: rpg at SU-AI

The first times that I ran SCCPP, I arbitrarily chose to use approximately
30000 (decimal) free space.  The ratio of runtime versus GC time was
approximately the same as what you had supplied.  However I later ran it
with twice that space.  I was now getting exactly one GC per test (each
recovering approximately the same amount of space).  My total GC time
had dropped by a factor of 3 1/2.

UCI-Lisp uses the standard mark and sweep routine.  Obviously there is
an overhead associated with each GC.  Furthermore, a GC during which
most of the core is tied up is more costly than one in which free space
is almost empty.  Thus GC time required for a problem is a hard point
to pin down.

If I were to set my free space up so that I totally filled memory, I
would have less than one GC per run on the average.  If I used more free
space than physical core would allow (not possible on our 20 but a problem
on our 10), I would swap GC time for paging time.  This would seem to be
unfair as paging time could be considered either the job's cost or system
overhead (like swapping).  On our 10, increasing core size to beyond
the physical space dramatically increases run time because of paging costs.

It might be a reasonable idea to specify a maximum amount of free space
(or should it be total core used?) in which a program can be run.  This
may not be possible for the Lisp machines.  An alternative idea would be
to adjust core size so that you get exactly one GC per test.  Suppose that
you start off with a fresh free space and run the problem, GCing only when
it was done.  This would count the cost to collect the words used once
without counting what it would cost to track through the temporary data
during the running of the program (which is dependent on when the GC
happens).   I feel this would be a reasonable comparison to the costs
associated with having a parallel GC or a machine with such a large
address space so as to never have a GC.
-------
 8-Apr-81 00:01:56-CST,1506;000000000001
Date:  8 Apr 1981 0001-CST
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: SCCPP on UCI-Lisp
To: rpg at SU-AI

		Results of LISP timing tests for UCI-Lisp

(Times are in the form R+G where R is the runtime (not including GC time)
and G is the GC time.  All times are in seconds.  "Interp" means interpreted,
"Slow" means compiled but using UUO-links, "Fast" means compiled with the UUO
links replaced by direct jumps.)

				   Processor
Program			KL-2060			KI-1060
	   Interp	Slow	  Fast	     Interp	  Slow		Fast

SCCPP:
 Free:
  30000	 14.00+2.78  5.32+2.38	3.38+2.78   53.43+8.58  21.14+11.62 12.77+11.56
	 14.04+2.83  5.37+2.70	3.35+2.71		20.40+11.47 12.63+11.36
				3.34+2.70		20.72+11.50 12.71+11.44

  60000	 14.60+0.58  5.40+0.50	3.35+0.53   52.80+1.42	21.18+1.45  12.81+1.39
	 14.20+0.64  5.44+0.52	3.36+0.53		21.27+1.38  12.34+1.43
	 14.09+0.63  5.37+0.52	3.35+0.52		21.19+1.40  12.93+1.40
	 14.22+0.61  5.35+0.52	3.40+0.53

  Notes: The functions with more than 5 arguments were changed to 5 argument
functions by making the last 3 into a list.  These were involved in less than
3% of the run time so the difference is insignificant.  The timings on the
2060 were with a load average less than 1.  The timings on the KI-10 were with
a moderate load.  The differences in the various timing runs are probably due
to system overhead due to swapping.  The 30K free space examples had about 5
or 6 GC's while the 60K free space examples had one GC each.
-------
13-Apr-81 21:34:35-CST,1298;000000000001
Date: 13 Apr 1981 2134-CST
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: Re: Groundrules (reprise)   
To: RPG at SU-AI
In-Reply-To: Your message of 13-Apr-81 1439-CST

Your "standard > 5 argument LEXPR conversion" is not correct in UCI-LISP
and, presumably, not in Hedrick's Lisp.  The compiler does not handle the
lambda with > 5 args whether it is a defined function or an internal
lambda.

Now I ask, where does your word "standard" come from?

Again, I'd like to point out that Hedrick's original program should be
ok.  He pointed out that he could not handle > 5 args and handled it in
a way that would not affect the timing in any significant way (I'd say
less than .1%).

By the way, I checked the LISP OARCHI file and found that the >5 args was
handled beginning in November 1972.


(While I was checking on that, I got your next message.  You still are thinking
that a compiler would have no trouble with >5 args to a Lambda even if it
can't handle >5 args to the Lambda of the function definition.  The answer, of
course, is to use nested LAMBDAs or a PROG and SETQs.  From the original
entry of 11/72 into LISP OARCHI it looks like MACLISP does the same thing when
it finds an internal LAMBDA with >5 args as it does with a function of >5 args.)
-------
13-Apr-81 21:46:53-CST,435;000000000001
Date: 13 Apr 1981 2146-CST
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: Timing msg on last msg from me
To: rpg at SU-AI

When I sent my last msg to you (a few minutes ago), I didn't realize that
I was getting messages then that you had sent some 7 hours earlier.
Our system was down so we were just getting the mail.  This should explain
my comment about getting a msg from you while I was typing that one in.
-------
-------

∂06-Apr-81  1204	RPG  
 ∂03-Apr-81  1531	ML   
 ∂02-Apr-81  2346	KASHTAN at SRI-AI 	Re: franzlisp   
Date:  2 Apr 1981 2348-PST
From: KASHTAN at SRI-AI
Subject: Re: franzlisp 
To: ML at SU-AI
In-Reply-To: Your message of 2-Apr-81 2301-PST

1) Does franzlisp pose any restrictions of its own on the address space
	available to a user?
		Yes and no.  Since franz lisp uses a segment based allocation
		scheme and a bitmap for marking data structures during garbage
		collection there are static arrays (for the bitmap and for
		segment descriptors) that place an upper bound on the size
		of the franz lisp address space.  It is not a hard limit,
		though.  Re-compiling the interpreter with larger static
		data structures will allow you to use a larger address space.
		I think the default maximum is around 5 Mbytes.  Where you
		are going to really run into trouble is in the virtual address
		space which Unix gives you.  There was a design flaw in the
		vax memory architecture (driven by the fact that VAXen had
		to originally run with only 128Kb memory) which made the page
		size very small (512 bytes).  To get around the horrendous size
		of page tables required for this DEC went to a 2 level paging
		scheme and paged the user's page tables. Unfortunately, UNIX
		does not page the user page tables.  The difference here is
		that while you would normally (if paging the page tables) only
		require 4 bytes of resident physical memory for every 64Kb of
		virtual memory -- on UNIX you require 516 bytes of resident
		physical memory for every 64Kb of virtual memory.  This adds
		up very quickly.  So you can pretty much rule out humungous
		(eg 128Mbyte) address spaces on your VAX for the time being.
2) Does it run reasonably fast (either under unis or unix), particularly
	the compiled code?
		If you are comparing compiled code to what you are used to in
		MACLISP, no (particularly for heavy arithmetic computation).
		We tried to use Franz Lisp for some Image Analysis stuff and
		it just was too slow to be usable.  There have been some
		recent fixes to the compiler to improve performance in array
		accessing and arithmetic computation but these have really
		not been sufficient for our purposes.  I think we are now
		betting on NIL (which should port to Unix quite trivially).
3) How difficult is it to write a driver for a device hanging off the
	unibus on a dr11-b (this is a dma device, we would be interfacing
	a Grinnell) for the operating system?
		Grinnells on dr-11b's are absolutely trivial.  There are
		probably 1/2 dozen Unix drivers (all working) available to
		run your Grinnell through a dr-11b.  Your best bet would be
		Mike Accetta (accetta@cmua).
4) What are your general impressions of franzlisp as a workable system
	on a vax?
		If you are really interested in finding out about Franz Lisp
		as a system building tool on the VAX, I would suggest talking
		to Andy Witkin (witkin@sri-ai).  He is the resident MACLISP
		person (ex-MIT vision person) around here and has tried to
		use Franz Lisp as a MACLISP substitute on the VAX.
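Kashtan's 516-byte figure in (1) checks out arithmetically (a back-of-the-envelope sketch; the 4-bytes-per-entry assumption is the VAX page-table-entry size):

```lisp
;; 64Kb of virtual memory at 512 bytes per page is 128 pages;
;; at 4 bytes per page-table entry that is a 512-byte page table.
;; Keeping that table resident, plus the 4-byte entry above it,
;; gives 512 + 4 = 516 resident bytes per 64Kb of virtual memory.
(quotient 65536 512)     ; => 128 pages
(times 128 4)            ; => 512 bytes of page table
(plus 512 4)             ; => 516
```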


Sorry to be presenting such a generally dim view of the world.  The VAX is
just starting to mature as a research tool -- things are still kind of bare.
On the bright side,  you will definitely want to be using Gosling's EMACS for
the VAX.  A very winning editor!!

David
-------

∂14-Apr-81  2031	RPG  
To:   RPG at SU-AI, jonl at SU-AI
 ∂14-Apr-81  2022	HEDRICK at RUTGERS 	revised (final?) version of report on Lisp conference
Date: 14 Apr 1981 1237-EST
From: HEDRICK at RUTGERS
Subject: revised (final?) version of report on Lisp conference
To: eis at MIT-AI, geoff at SRI-KL, boyer at SRI-KL, engelmore at USC-ISI,
To: dreifus at WHARTON-10
Redistributed-To: Bboard at SRI-CSL, bboard at SCORE, bboard at SAIL
Redistributed-By: GEOFF at SRI-CSL
Redistributed-Date: 14 Apr 1981

I have received a number of comments on my earlier report.  Here is a
revision that incorporates most of them.  It is still quite informal
in tone, and should probably not be circulated outside the ARPAnet
community.

---------------------------------------------------------------------
This is a slightly revised report on the Lisp conference at SRI.  It
has been modified to reflect some of the comments of people who read
the first draft. The conference was called by ARPA to discuss the future
of Lisp in the AI community, as well as the proposal for ARPA to buy
Dolphin computers for use within this community.  It lasted from 8:30 am
to about 10pm on 8 April.

One of the major concerns that motivated the meeting was a feeling that
there are suddenly many projects to reimplement Lisp, and that we may
end up with many half-baked, incompatible implementations.  There had
been some hope for getting some coherence among them.  As far as I can
see, these fears are somewhat justified, and little was accomplished
towards creating coherence.  There were in fact 13 Lisp implementation
projects listed.  (Some of these have been finished, however, so that
number is a bit too big.)  Fortunately, none of them are creating new
languages.  Rather they are reimplementations of existing dialects for
new hardware.  Thus there is somewhat less chaos than the number 13
would imply.  Here they are.  All results are publicly available
unless stated otherwise.

Interlisp:
   SRI portable Interlisp [SRI].  not yet funded.  projected to take 18
	months once it is funded.  They are thinking of the VAX, F-5, or
	any 32-bit machine that they can get funding for.
   Interlisp-D [Xerox].  For Dolphin, a Lisp machine with bit-mapped
	display.  Finished.
   Interlisp-Jericho [BBN].  For BBN's Jericho, a high-performance
	personal computer.  in progress, projected to be ready May,
	1982.  I believe results will be proprietary.
   VAX Interlisp [USC-ISI].  They hope to have the "virtual machine"
	(i.e. the lowest level of the interpreter) running by June, but
	it will be up to a year before the whole environment is working.
	This is the most critical of all the projects as far as most
	users are concerned. 
MACLisp:
   Lisp machine [MIT].  A version of Lisp extended to make heavy use of
	bit-mapped display, and having a number of new language
	features. Finished some time ago, but development continues.
	Results are proprietary, with MIT licensing two commercial
	companies to distribute them.
   NIL [MIT].  Intended to have everything that the Lisp machine has
	that will be useful on a conventional machine.  Also
	high-performance compiler.  Will have something by end of the
	summer, but development will be ongoing.  Mainly intended for
	VAX, but probably will be done for extended-addressing 20.
   S1-NIL.  NIL for the S1.  This is a multi-CPU supermachine sponsored
	by the military (Navy?).  Projected to be ready in about 2.5
	years.
   Spice Lisp [CMU].  Dialect of MACLisp for personal computers.  Will
	use microcode assist.  First implementation will be on VAX and
	extended addressing DEC-20 by simulating the proposed microcode.
	It is unclear whether the VAX and DEC-20 versions will be usable
	Lisps or not. Officially, they are intended mainly for debugging
	until the personal machine  hardware is available.  However they
	obviously have hopes that these will be usable in their own
	right. [see below for comments on this] Projected to be ready in
	early 1982.
   Franz Lisp [Berkeley].  MACLisp dialect for VAX.  finished.  Many
	people seem to be unenthusiastic about this, but it seems to be
	a solid implementation.  Maybe a trifle slower than it might be
	and somehow not as "sexy" as a typical MIT product.
Other dialects:
  Standard Lisp (Utah) - This is really a research project in
	portability. They are trying to write as much of Lisp as
	possible in Lisp. The compiler has extensions to allow it to be
	used for system programming.  Currently a very small part is
	still written in assembly language.  They should have an
	implementation for extended-address DEC-20 within 6 months.
  Elisp (Rutgers) - This is a recoding of R/UCI Lisp for extended
	addressing DEC-20.  This should be finished by the end of the
	summer.
  MDL [MIT] - This is not really a Lisp dialect.  It is intended as a
	successor to Lisp.  It has more data types and has been
	generally cleaned up.  they are working on a portable
	implementation.  There has been a DEC-20 implementation for
	years.  They now have an implementation that makes some use of
	extended addressing.  When the portable implementation is
	finished, it will be used to bring up a fully extended version
	for the DEC-20.  This is projected to be in 6 months.

Of all these, the project that generated the most interest was clearly
the VAX Interlisp.  Many people are committed to both the VAX and
Interlisp, and are in bad shape until this happens.

Now some comments as to why there are 13 projects.  I will not comment
much on the "other dialects".  MDL is its own thing, and a portable
implementation for it makes perfect sense.  Similarly, Utah's research
is a very interesting project.  In my opinion it is a more promising
approach than those being used by the Interlisp or MACLisp people, and
these folks would have been well advised to be cooperating more closely
with Utah than they are.  Our project is strictly a short-term fix to a
critical problem, and requires minimal effort.  That is its only
possible justification.  In the long run we will probably want to choose
a dialect that is going to be available on personal machines.

Now for the Interlisp and MACLisp projects.  They are all attempts to
implement existing languages for new hardware.  In the case of MACLisp
they also are trying to clean up and develop the language further.  

The Interlisp projects are coordinated with each other, at least in the
sense that they are all implementing the same language.  Also, much of
the Lisp code is shared.  However apparently the Interlisp "virtual
machine" (i.e. the part done in assembly language) is relatively large,
and the user interface (e.g. debugger) depends upon details of the stack
that are different for different machines.  Thus transporting Interlisp
is a fairly major project.  As far as I can see, the projects other than
SRI's were reimplementations of the virtual machine for particular
hardware, with no particular thought given to portability.  I think SRI
hopes to improve this.  The usefulness of this project will be affected
by who sponsors them, since certain funding arrangements could result in
a proprietary product.  There is a single manual that applies to all
Interlisp versions.  [This is more important than it sounds.]

The MACLisp projects are not particularly coordinated.  Each of them is
implementing different dialects, with separate (or non-existent)
manuals. In general the Lisp Machine seems to have the most features,
and the other dialects are probably more or less subsets of it.  Of the
current projects, there are considerable differences in style:
  Franz Lisp is the most conservative.  They wanted something up
	quickly, using existing technology.  It was done in C, which
	should help its transportability.
  Lisp Machine is the most radical.  First, it is standalone, with
	microcode support.  Second, it has every language feature one
	can imagine except for spaghetti stacks.  Finally, it supports
	the bit-mapped display.  They believe that many of its features
	could only be done on special-purpose hardware.  This might
	possibly be transportable to another microcodable machine with
	similar capabilities, though no particular thought was given to
	portability.
  Spice Lisp and NIL are in some sense compromises.  They are attempts
	at cleaning up the old language design, and taking many of the
	good ideas of Lisp Machine Lisp, but for somewhat more
	conventional machines.  At the moment these projects talk to
	each other, but are planning to implement somewhat different
	dialects.  They were strongly encouraged to talk to each other.
	They are both giving thought to portability, though SPICE is
	only intended to be portable among personal machines with
	certain capabilities.

The big question is, why so many projects?  As far as I can see,
here are the justifications:
  MDL - this is a new language design, with generally good ideas. It has
	been very influential on newer developments in Lisp.
  Utah - this is a research project in portability, and seems to be
	doing very good work.  In retrospect, the AI community would be
	much better off if Utah had decided to do their research using
	either Interlisp or MACLisp.  As it is, they have attempted to
	create a new dialect, and no one is interested.  Probably their
	work will be ignored, much to everyone's hurt, unless they
	decide to change to another dialect, which I do not expect.
  Franz Lisp and Elisp - these are projects to transport existing
	dialects to machines with lots of users that desperately needed
	the results. Elisp should die away eventually if other projects
	succeed. Franz Lisp may well survive.
  Interlisp - these projects are simply transporting Interlisp to other
	machines, which is always reasonable.  The only real criticism
	here would be that it is a shame that they can't develop
	technology to produce a new implementation more quickly.  But
	this is what SRI proposes to do.  In my opinion it is critical
	for that project to be done in such a way that the results are
	public.
  MACLisp - it is unfortunate that so much work is being done on MACLisp
	in an uncoordinated way.  There is some evidence that each
	project is made up of "true believers" who believe that there is
	little merit to the others.  We heard a somewhat amusing comment
	by one.  He said it wasn't true that there was chaos among the
	MACLisp people.  It was just that there were 4 different
	projects going in 4 different directions....

Everyone seems to believe that it is a good idea for there to be ongoing
work on both Interlisp and MACLisp.  Interlisp tends to be more
conservative in its development.  It has good user facilities and
well-coordinated support.  But there was a surprising consensus (that
included most of the Interlisp users) that
  - Interlisp is quite expensive compared to MACLisp
  - Interlisp as a dialect lacked a number of important features
	compared to MACLisp, and had no important features missing in
	MACLisp. Note that this comment refers only to facilities
	commonly used in programs, not to the user support environment.
	Many people believe that in user support Interlisp is better
	than MACLisp (except on the Lisp Machine, where MACLisp really
	shines)
Thus what was keeping people with Interlisp is
  - good user facilities
  - the fact that all implementations are compatible
  - good, complete documentation
  - good support

To the outside observer it appeared that in the long run MACLisp might
in fact take over if it could do the following:
  - supply the user facilities of the same power as Interlisp's, or
	document the fact that it already has them (if it does - I take
	no position on this question)
  - agree on a common language definition, with extensions for the Lisp
	Machine and others who need it
  - produce a complete manual, showing all the user facilities and being
	common among implementations.  A good form for this might be a
	loose-leaf binder, so that they could provide additional pages
	and chapters for the Lisp machine, and let you select which
	debugger and editor you wanted.
  - somehow assure users outside MIT that there was a central support
	organization that would respond to their concerns.  (This seems
	to be true, but there may be a PR problem.)
  - possibly do something to isolate non-MIT users from the hack-of-the
	week club, while preserving the fact that MACLisp will continue
	to develop more aggressively than Interlisp.  It is unclear
	whether there is a real problem here or a PR problem.

I am not convinced that the MACLisp folks can do this if left to
themselves. There was a proposal that ARPA might somehow be able to
cause it to happen.

Finally there is the question of what will happen next as far as
hardware acquisition.  We polled representatives of various user groups.
Most of them have DEC-10's and -20's.  They all say the PDP-10 is dead,
but are still doing most of their work on it.  However there are also a
significant number of people who use VAX as their primary system.  Some
of these are using Franz Lisp.  It seems fine.  But many are desperate
for VAX Interlisp.  Few people are currently using personal computers
for much, or even have definite plans to use them.  The main places that
are highly committed are
   MIT, where most Lisp work is using Lisp Machines
   BBN, which is committed to Jericho
   CMU, which is committed to Spice (in a somewhat indefinite future), 
   PARC, where most Lisp work is done with Dolphins and Dorados
   HPP(Sumex), which is committed to Dolphins
Others are hedging their bets with one or two experimental machines, or
are just plain waiting.

On the other hand, there are a number of complaints of a serious
shortage of cycles.  The most vocal is HPP/Sumex.  Thus there is some
pressure on ARPA to do something immediately.  HPP is pressuring them to
buy a large group of Dolphins.  There may also be pressure from
elsewhere, but it was not clear in this meeting.  A number of other
people have said that if ARPA does go this way, they will follow.  But
few people other than HPP feel strongly enough to commit to Dolphins
with their own money, unless ARPA does.  If ARPA does, then it will
become a de facto standard and no one wants to be left out.  People are
concerned that the Dolphin is a bit too slow (apparently about 1/2 the
power of a Lisp Machine). The feeling is that a year ago it would have
been a great idea, but its time may have passed.  The problem is that
there is no obvious other candidate.  People advised ARPA to look around
carefully to see if there wasn't something better.  One problem is that
whatever it is must support Interlisp.  Thus the Lisp machine will only
work if they will put Interlisp-D on it.  This might be technically
feasible, but the Lisp Machine people aren't interested in doing it.
(Whether the smell of that much money will arouse their appetite remains
to be seen.) The Jericho isn't quite ready, nor are any other personal
computers, though many are on the horizon.  The only other thing that
could be done quickly would be a single user VAX, which DEC might be
able to build if they would, or a small Foonly (a PDP-10 equivalent).
DEC had a single user PDP-10, which would be an ideal solution, but
decided not to build it.  The guess is that even ARPA isn't a large
enough customer to cause DEC to do something it otherwise wouldn't do.
If there is nothing better than the Dolphin, I don't have much of a
guess whether ARPA will get it or not. I think they may flip a coin.

The problem is that the proposed buy of Dolphins would be a large
commitment, and might make it hard for ARPA to take advantage of a
better system that may show up in a year.  It is not just that the
hardware will be expensive.  It is that by the time you add in the
software and hardware support, the total is 3 times the actual hardware
purchase price.

At that point the meeting ended, with no conclusion.
-------

∂22-Apr-81  1801	Bernard S. Greenberg       <Greenberg at MIT-Multics> 	Multics Timing results vindicated  
Date:     22 April 1981 2057-est
From:     Bernard S. Greenberg       <Greenberg at MIT-Multics>
Subject:  Multics Timing results vindicated
To:       RPG at SU-AI
Cc:       CHoffman.mal at MIT-Multics, HIC at MIT-MC,
    lisptranslators at SU-AI

Laugh not at Multics nor Multics Maclisp, but at a pretty
cruddy open-code implementation of MAPCAN.  Due to footwork
by Carl Hoffman and myself, we found our compiler generating
calls to NCONC in the code for MAPCAN!  Needless to say, this
caused quadratic time expansion in list-searching! 
This was the cause of the 86-second run.  Recoding
mapcan as a macro keeping track of the end of the list cleverly,
I produced the following results of which I am no longer ashamed:

lisp
*
(setq base 10.)
10.
(load 'gabx)
t
(test)
2592.
(runtime 3.690232)
(gctime 2.478373)
t
(test)
2592.
(runtime 3.679003)
(gctime 3.930743)
t
(test)
2592.
(runtime 3.693353)
(gctime 2.650682)
t
(quit)
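
The fix Greenberg describes -- keeping a pointer to the end of the
accumulated list so each splice is constant-time, rather than letting
NCONC rescan the whole result on every iteration -- can be sketched
roughly as follows.  This is illustrative code only, not the actual
Multics macro; the name FAST-MAPCAN and the expansion into a function
rather than a macro are my own choices:

(defun fast-mapcan (f l)
  (prog (head tail piece)
     a	(cond ((null l) (return head)))
	(setq piece (funcall f (car l)))
	(cond ((null piece))			;nothing to splice
	      ((null head)			;first piece starts the list
	       (setq head piece tail (last piece)))
	      (t (rplacd tail piece)		;O(1) splice at the tail
		 (setq tail (last piece))))
	(setq l (cdr l))
	(go a)))

An open-coded NCONC, by contrast, walks to the end of the accumulated
result each time, which is what produced the quadratic behavior in the
86-second run.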

∂23-Apr-81  1232	RPG  	FRANZ Benchmark    (FRPOLY)
To:   lisptranslators at SU-AI   
Here, below, is the benchmark from Berkeley. It is in roughly
MacLisp syntax, but let me point out a few things about it.

First, DEFMACRO and the ` (backquote) syntax. DEFMACRO is
a mechanism for defining macros in MacLisp in which the form
is broken into named arguments, unlike standard MacLisp macros
which have exactly one argument, the macro form itself (EQly
that form). The backquote syntax takes a form and produces code
to generate that form. An example helps here:

	`(atom ,e) turns into (list 'atom e)
	`(signp e ,x) is (list 'signp 'e x)

Thus, , (comma) is the unquoting character.
For example, then, occurrences of (pcoefp x) in the code
below turn into (atom x) by the action of the macro
pcoefp. DEFMACRO provides a form which is substituted for
the calling form with arguments bound in the obvious manner.
Here is the equivalent standard MacLisp macro definition of
pcoefp:

	(defun pcoefp macro (x)
	       (list 'atom (cadr x)))

To run this benchmark interpretively, I suggest expanding the
macros once, either at read time or at first runtime. For those
who need it I can provide this file with macros expanded.

Another hack for defining these macros so that they are expanded
once only is:

(defun pcoefp macro (x)
  ((lambda (form)
    (rplaca x (car form))
    (rplacd x (cdr form))
    form)		   ;value of RPLACD assumed to be undefined
   (list 'atom (cadr x))))

LOCALF seems to be a declaration of LOCAL function names. For MacLisp
I've commented this out. SPECIAL means that there is a global
value cell and that binding is dynamic on that cell.

Here is what SIGNP does:

2) SIGNP IS NOW A FSUBR.  THE FIRST ITEM IN THE ARGLIST IS AN
INDICATOR FOR COMPARISON TO ZERO, E.G., (SIGNP LE N) IS NON-NIL
IF AND ONLY IF THE VALUE OF N IS A NUMBER LESS THAN OR EQUAL TO 
ZERO [SIGNP DOES NOT REQUIRE N TO BE OF NUMBER TYPE].  THE
INDICATORS FOLLOW THE PDP-10 ARITHMETIC COMPARISON INSTRUCTIONS, AND
SHOULD BE SELF EXPLANATORY:  L E LE GE N G 
[E means zerop, N means not zerop.]
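
For instance (my own examples, following the description above; note
that SIGNP is an FSUBR, so the indicator is not evaluated):

(signp le 0)	; t   -- 0 is less than or equal to zero
(signp le -4)	; t
(signp le 3)	; nil
(signp n 1)	; t   -- N: the value is not zero
(signp e 0.0)	; t   -- E: zerop, true for 0 or 0.0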

(RUNTIME) and (STATUS GCTIME) return the number of microseconds of
total runtime and gctime. Note that gctime is included in
runtime in MacLisp.

There is a difference between `+' and `PLUS' in Franz, which is
that + takes 2 arguments, both fixnums (machine integers) and returns
a fixnum as its result. PLUS takes any number of any type of number and
returns the most appropriately typed number. In the tests below, one of them
is designed to overflow the VAX machine integer range and drift into
BIGNUMs, which represent integers larger than the architecture supports. In MacLisp
and FRANZ there is a BIGNUM package that allows contiguous
words of memory to represent one number. So, beware of where there are +'s and
PLUS's. The same is true for - and DIFFERENCE, * and TIMES, / and QUOTIENT,
> and GREATERP, < and LESSP, etc. Generic arithmetic is closed-compiled
while specific-type arithmetic is open-coded.
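
So, for instance (illustrative examples of my own, following the
description above):

(+ 1 2)		; fixnum-only, open-coded
(plus 1 2)	; generic, closed-compiled call
(plus 1 2.5)	; generic arithmetic handles mixed types
;; Repeatedly squaring r2 (whose coefficients start at 100000)
;; soon exceeds the fixnum range; PLUS and TIMES then return
;; BIGNUMs, whereas + and * on those values would be wrong.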

(ODPP x) tests if X is odd.

= is numeric EQUAL.

PDIFFER1 is mentioned but not defined; it is not called in these tests, however.

Here's my transcript of SAIL MacLisp:

(setup)
(Z 1 1.0 0 (Y 1 1.0 0 (X 1 1.0 0 1.0))) 
(bench 2)
(POWER= 2 (0.017 0.0) (0.017 0.0) (0.016 0.0)) 
(bench 5)
(POWER= 5 (0.116 0.0) (1.334 1.084) (0.15 0.0)) 
(bench 10)
(POWER= 10 (2.534 1.8) (19.733 17.151) (8.983 7.901)) 
(bench 15)
(POWER= 15 (16.65 8.832) (112.516 89.298) (63.9 56.749)) 

Which I ran compiled. Times are in seconds.

The following is the benchmark. 
			-rpg-


;;;; Benchmark Commences:

;;; Franz Lisp benchmark from Fateman
;; test from Berkeley based on polynomial arithmetic.

(declare (special ans coef f inc i k qq ss v *x*
		    *alpha *a* *b* *chk *l *p q* u* *var *y*
		    r r2 r3 start res1 res2 res3))
(declare (localf pcoefadd pcplus pcplus1 pplus ptimes ptimes1
		 ptimes2 ptimes3 psimp pctimes pctimes1
		 pplus1))
;; Franz uses maclisp hackery here; you can rewrite lots of ways.
(defmacro pointergp (x y) `(> (get ,x 'order)(get ,y 'order)))

(defmacro pcoefp (e) `(atom ,e))
(defmacro pzerop (x) `(signp e ,x))			;true for 0 or 0.0
(defmacro pzero () 0)
(defmacro cplus (x y) `(plus ,x ,y))
(defmacro ctimes (x y) `(times ,x ,y))


(defun pcoefadd (e c x) (cond ((pzerop c) x)
			      (t (cons e (cons c x)))))

(defun pcplus (c p) (cond ((pcoefp p) (cplus p c))
			  (t (psimp (car p) (pcplus1 c (cdr p))))))

(defun pcplus1 (c x)
       (cond ((null x)
	      (cond ((pzerop c) nil) (t (cons 0 (cons c nil)))))
	     ((pzerop (car x)) (pcoefadd 0 (pplus c (cadr x)) nil))
	     (t (cons (car x) (cons (cadr x) (pcplus1 c (cddr x)))))))
	 
(defun pctimes (c p) (cond ((pcoefp p) (ctimes c p))
			   (t (psimp (car p) (pctimes1 c (cdr p))))))

(defun pctimes1 (c x)
       (cond ((null x) nil)
	     (t (pcoefadd (car x)
			  (ptimes c (cadr x))
			  (pctimes1 c (cddr x))))))

(defun pplus (x y) (cond ((pcoefp x) (pcplus x y))
			 ((pcoefp y) (pcplus y x))
			 ((eq (car x) (car y))
			  (psimp (car x) (pplus1 (cdr y) (cdr x))))
			 ((pointergp (car x) (car y))
			  (psimp (car x) (pcplus1 y (cdr x))))
			 (t (psimp (car y) (pcplus1 x (cdr y))))))

(defun pplus1 (x y)
       (cond ((null x) y)
	     ((null y) x)
	     ((= (car x) (car y))
	      (pcoefadd (car x)
			(pplus (cadr x) (cadr y))
			(pplus1 (cddr x) (cddr y))))
	     ((> (car x) (car y))
	      (cons (car x) (cons (cadr x) (pplus1 (cddr x) y))))
	     (t (cons (car y) (cons (cadr y) (pplus1 x (cddr y)))))))

(defun psimp (var x)
       (cond ((null x) 0)
	     ((atom x) x)
	     ((zerop (car x)) (cadr x))
	      (t (cons var x))))

(defun ptimes (x y) (cond ((or (pzerop x) (pzerop y)) (pzero))
			  ((pcoefp x) (pctimes x y))
			  ((pcoefp y) (pctimes y x))
			  ((eq (car x) (car y))
			   (psimp (car x) (ptimes1 (cdr x) (cdr y))))
			  ((pointergp (car x) (car y))
			   (psimp (car x) (pctimes1 y (cdr x))))
			  (t (psimp (car y) (pctimes1 x (cdr y))))))

(defun ptimes1 (*x* y) (prog (u* v)
			       (setq v (setq u* (ptimes2 y)))
			  a    (setq *x* (cddr *x*))
			       (cond ((null *x*) (return u*)))
			       (ptimes3 y)
			       (go a)))

(defun ptimes2 (y) (cond ((null y) nil)
			 (t (pcoefadd (plus (car *x*) (car y))
				      (ptimes (cadr *x*) (cadr y))
				      (ptimes2 (cddr y))))))

(defun ptimes3 (y) 
  (prog (e u c) 
     a1 (cond ((null y) (return nil)))
	(setq e (+ (car *x*) (car y)))
	(setq c (ptimes (cadr y) (cadr *x*) ))
	(cond ((pzerop c) (setq y (cddr y)) (go a1))
	      ((or (null v) (> e (car v)))
	       (setq u* (setq v (pplus1 u* (list e c))))
	       (setq y (cddr y)) (go a1))
	      ((= e (car v))
	       (setq c (pplus c (cadr v)))
	       (cond ((pzerop c) (setq u* (setq v (pdiffer1 u* (list (car v) (cadr v))))))
		     (t (rplaca (cdr v) c)))
	       (setq y (cddr y))
	       (go a1)))
     a  (cond ((and (cddr v) (> (caddr v) e)) (setq v (cddr v)) (go a)))
	(setq u (cdr v))
     b  (cond ((or (null (cdr u)) (< (cadr u) e))
	       (rplacd u (cons e (cons c (cdr u)))) (go e)))
	(cond ((pzerop (setq c (pplus (caddr u) c))) (rplacd u (cdddr u)) (go d))
	      (t (rplaca (cddr u) c)))
     e  (setq u (cddr u))
     d  (setq y (cddr y))
	(cond ((null y) (return nil)))
	(setq e (+ (car *x*) (car y)))
	(setq c (ptimes (cadr y) (cadr *x*)))
     c  (cond ((and (cdr u) (> (cadr u) e)) (setq u (cddr u)) (go c)))
	(go b))) 

(defun pexptsq (p n)
	(do ((n (quotient n 2) (quotient n 2))
	     (s (cond ((oddp n) p) (t 1))))
	    ((zerop n) s)
	    (setq p (ptimes p p))
	    (and (oddp n) (setq s (ptimes s p))) ))

(defun setup nil
  (putprop 'x 1 'order)
  (putprop 'y 2 'order)
  (putprop 'z 3 'order)
  (setq r (pplus '(x 1 1 0 1) (pplus '(y 1 1) '(z 1 1)))) ; r= x+y+z+1
  (setq r2 (ptimes r 100000)) ;r2 = 100000*r
  (setq r3 (ptimes r 1.0)); r3 = r with floating point coefficients
  )
; time various computations of powers of polynomials, not counting
;printing but including gc time ; provide account of g.c. time.

; The following function uses (ptime) for process-time and is thus
;  Franz-specific.

(defmacro ptime () '`(,(runtime) ,(status gctime)))

(defun bench (n)
  (setq start (ptime)) ;  Franz ticks, 60 per sec, 2nd number is GC
  (pexptsq r n) 
  (setq res1 (ptime))
  (pexptsq r2 n)
  (setq res2 (ptime))
  ; this one requires bignums.
  (pexptsq r3 n)
  (setq res3 (ptime))
  (list 'power=  n (b1 start res1)(b1 res1 res2)(b1 res2 res3)))
(defun b1(x y)(mapcar '(lambda(r s)(quotient (float (- s r)) 1000000.0)) x y))

;instructions:
;  after loading, type (setup)
; then (bench 2) ; this should be pretty fast.
; then (bench 5)
; then (bench 10)
; then (bench 15)
;... 

∂23-Apr-81  1245	RPG  	Franz benchmark    
To:   lisptranslators at SU-AI   
The FRANZ benchmark is to be referred to as: FRPOLY.
			-rpg-

∂24-Apr-81  1324	Bernard S. Greenberg       <Greenberg at MIT-Multics> 	Re: FRANZ Benchmark, Multics Numbers    
Redistributed-Date:  24 April 1981 16:24 est
Redistributed-By:  Greenberg.Symbolics at MIT-Multics
Redistributed-To:  lisptranslators at SU-AI, rpg at SU-AI
Date:     23 April 1981 2059-est
From:     Bernard S. Greenberg       <Greenberg at MIT-Multics>
Subject:  Re: FRANZ Benchmark, Multics Numbers
To:       RPG at SU-AI
Cc:       lisptranslators at SU-AI

Here they are, and not bad at all.  Bear in mind that Multics Lisp
represents all fixna and flona as immediate, thus has no PDL
numbers, pdlnmks, number consing, etc.  Bigna are allocated
contiguously....  Code was compiled, using installed system
backquote and an adhoc defmacro definition. Wasn't clear if
RPG's input numbers were decimal or octal, so I did it both ways:

lisp
*
(load 'gab2)
t
(setup)
(z 1 1.0 0 (y 1 1.0 0 (x 1 1.0 0 1.0)))
(bench 2)
(power= 2 (0.016692 0.0) (0.015114 0.0) (0.015725 0.0))
(bench 5)
(power= 5 (0.150491 0.0) (0.212428 0.0) (0.154568 0.0))
(bench 10)  ;=8
(power= 10 (0.968238 0.184816) (1.71576 0.389726) (0.99761099 0.187837))
(bench 10.) ;decimal
(power= 12 (2.000796 0.405341) (3.569996 0.880229) (1.883108 0.231459))
(bench 15) ;octal = 13.
(power= 15 (6.563067 1.148998) (13.168704 2.515469) (6.694873 1.155386))
(bench 15.) ;decimal
(power= 17 (12.532608 1.85896) (27.568518 5.391129) (12.636826 1.860995))
(quit)

hmu

Multics 35.0a, load 42.0/120.0; 42 users, 27 interactive, 12 daemons.
Absentee users 1/3 (+2 FG)

∂24-Apr-81  1414	RPG  	Errata   
To:   lisptranslators at SU-AI   
In the FRPOLY benchmarks, the calls should be:
	(SETUP)	
	(BENCH 2.)
	(BENCH 5.)
	(BENCH 10.)
	(BENCH 15.)
			-rpg-

∂24-Apr-81  1608	CSVAX.jkf at Berkeley 	octal vrs decimal
Date: 24 Apr 1981 15:52:49-PST
From: CSVAX.jkf at Berkeley
To: lisptranslators@su-ai
Subject: octal vrs decimal

 I can't see any reason for using octal for input or output in any benchmark.
Why don't we just make it a rule that all numbers are decimal to reduce
confusion in the future.

∂25-Apr-81  1242	Greenberg.Symbolics at MIT-Multics 	Re: octal vrs decimal   
Date:  25 April 1981 15:39 est
From:  Greenberg.Symbolics at MIT-Multics
Subject:  Re: octal vrs decimal
To:  CSVAX.jkf at BERKELEY
cc:  lisptranslators at SU-AI
In-Reply-To:  Message of 24 April 1981 18:52 est from CSVAX.jkf

Why don't we put trailing dots on numbers so no one has a chance to blow it?

∂25-Apr-81  1320	Vanmelle at SUMEX-AIM 	Re:  Re: octal vrs decimal 
Date: 25 Apr 1981 1315-PST
From: Vanmelle at SUMEX-AIM
Subject: Re:  Re: octal vrs decimal
To:   Greenberg.Symbolics at MIT-MULTICS, CSVAX.jkf at BERKELEY
cc:   lisptranslators at SU-AI

 In response to the message sent 25 Apr 1981 1243-PST by 

I'm with jkf.  Lisp is not assembly language; why should anyone have
to qualify decimal integers?  Adding decimal points could lead to
additional confusion--in at least one dialect (Interlisp), decimal
points imply floating-point.
-------

∂25-Apr-81  1727	Greenberg.Symbolics at MIT-Multics 	Re:  Re: octal vrs decimal   
Date:  25 April 1981 20:25 est
From:  Greenberg.Symbolics at MIT-Multics
Subject:  Re:  Re: octal vrs decimal
To:  Vanmelle at SUMEX-AIM
cc:  CSVAX.jkf at BERKELEY, lisptranslators at SU-AI
In-Reply-To:  Message of 25 April 1981 16:15 est from Vanmelle

The issue here is not language design or what
Lisp oughtta be; the issue is minimizing confusion in
translation between dialects and making the rules clear.

∂25-Apr-81  2210	CSVAX.jkf at Berkeley 	Re:  Re: octal vrs decimal 
Date: 25 Apr 1981 21:34:31-PST
From: CSVAX.jkf at Berkeley
To: Greenberg.Symbolics@MIT-Multics
cc: lisptranslators at SU-AI, Vanmelle at SUMEX-AIM
Subject: Re:  Re: octal vrs decimal
In-reply-to: Your message of 25 Apr 1981 1725-PST (Saturday).

    Date: 25 Apr 1981 1725-PST (Saturday)
    From: Greenberg.Symbolics@MIT-Multics
    Subject:  Re:  Re: octal vrs decimal
    To:  Vanmelle at SUMEX-AIM
    cc:  CSVAX.jkf at BERKELEY, lisptranslators at SU-AI
    In-Reply-To:  Message of 25 April 1981 16:15 est from Vanmelle

    The issue here is not language design or what
    Lisp oughtta be; the issue is minimizing confusion in
    translation between dialects and making the rules clear.

Right,  we should make the conversion between dialects as easy as possible,
and that is why I suggest that we make decimal the default.  I don't know
of any dialect which can't understand decimal.  Our lisp system (Franz)
at least only handles octal on input not on output (since we've never found
any need for it).

∂24-Apr-81  1411	CSVAX.jkf at Berkeley 	franz timing results  
Date: 24 Apr 1981 13:22:17-PST
From: CSVAX.jkf at Berkeley
To: rpg@su-ai
Subject: franz timing results
Cc: CSVAX.fateman@Berkeley, CSVAX.jkf@Berkeley

							frpoly benchmark 
							submitted by j foderaro
							(csvax.jkf@berkeley)
							24 april 81

Here are the results of running the frpoly benchmark on Franz Lisp on 
a VAX 11/780 running Berkeley 4BSD Unix.  The load average was less
than one when the timing was done.  These results supersede the timings
you have which were made by us in March.



Script started on Fri Apr 24 06:04:12 1981
Reval: lisp
Franz Lisp, Opus 34
-> (load 'frpoly)
[fasl frpoly.o]
t
-> (setup)
(z 1 1.0 0 (y 1 1.0 0 (x 1 1.0 0 1.0)))
-> (bench 2)
(power= 2 (0.0167 0.0) (0.0333 0.0) (0.0 0.0))
-> (bench 5)
(power= 5 (0.15 0.0) (0.75 0.4333) (0.3833 0.2166))
-> (bench 10)
(power= 10 (2.8167 1.05) (8.2333 3.3) (3.2 1.2333))
-> (bench 15)
(power= 15 (18.2333 5.35) (92.0333 41.6333) (18.8 5.1333))
-> 
script done on Fri Apr 24 06:07:48 1981

∂28-Apr-81  1122	Vanmelle at SUMEX-AIM 	Re: Benchmarks        
Date: 28 Apr 1981 1118-PDT
From: Vanmelle at SUMEX-AIM
Subject: Re: Benchmarks    
To:   RPG at SU-AI

 In response to your message sent 28 Apr 1981 0625-PDT

Sure.  I can run them both on a 2020 (Sumex2020) and a 2060 (Score).
Has someone already done the translation?

	Bill
-------

I don't think so. The only tricky parts are the more-than-5-args problem
and catch/throw. I've faked catch and throw in InterLisp with the spaghetti
stuff. Let me try to locate what I did and show you. You may luck out because
at one point I had a MacLisp=>InterLisp translator that I used. I'll also
try to dig that one up and see what it does to the benchmark.
			-rpg-
∂28-Apr-81  2115	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	Re: FRANZ Benchmark        
Date: 28 Apr 1981 2310-CDT
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: Re: FRANZ Benchmark    
To: RPG at SU-AI
In-Reply-To: Your message of 23-Apr-81 1432-CST


    Mail from SU-AI rcvd at 24-Apr-81 0002-CST
    Date: 23 Apr 1981 1232-PST
    From: Dick Gabriel <RPG at SU-AI>
    Subject: FRANZ Benchmark    
    To:   lisptranslators at SU-AI   

      .......

    Here's my transcript of SAIL MacLisp:

    (setup)
    (Z 1 1.0 0 (Y 1 1.0 0 (X 1 1.0 0 1.0))) 
    (bench 2)
    (POWER= 2 (0.017 0.0) (0.017 0.0) (0.016 0.0)) 
    (bench 5)
    (POWER= 5 (0.116 0.0) (1.334 1.084) (0.15 0.0)) 
    (bench 10)
    (POWER= 10 (2.534 1.8) (19.733 17.151) (8.983 7.901)) 
    (bench 15)
    (POWER= 15 (16.65 8.832) (112.516 89.298) (63.9 56.749)) 

    Which I ran compiled. Times are in seconds.
	.......


I have a question about the results you show.  The problem I see is
the difference between the first result of (BENCH 15) and the third
result of (BENCH 15).  If I am not wrong, the runtimes after subtracting
out the (rather large) GC times are 7.818 vs 7.151.  The difference in
the two examples is that the first one uses integer coefficients while the
other uses the real number version of the same integer.  That indicates
to me that your machine does real-number multiplication faster than it
does integer multiplication???   Note that the results for the smaller
benchmarks give the expected result that the integer multiplication is
faster than real number multiplication.

I am curious about this.  Are you sure of your results?  Do you get the
same results if you increase your free space (or whatever) so that the
amount of GC decreases to something much less than 80% of your total
time.  If the real number version continues to run faster, I would be
very interested in why it is faster.
-------

FRPOLY at SAIL
The transcript was made blindly to give some rough idea of the runtime.
I'll run the entire thing again today and send out the real results.
			-rpg-
∂02-May-81  1245	Mabry Tyson <ATP.Tyson at UTEXAS-20> 	Re: FRANZ Benchmark        
Date:  2 May 1981 1437-CDT
From: Mabry Tyson <ATP.Tyson at UTEXAS-20>
Subject: Re: FRANZ Benchmark    
To: RPG at SU-AI
In-Reply-To: Your message of 23-Apr-81 1432-CST

Here are the results of the FRPOLY benchmark.   I did notice that compilation
of this benchmark resulted in more of a speedup than it did on the
SCCPP benchmark.

My comment in the notes at the end mentions that UCI-Lisp doesn't have a
BIGNUM package.  I don't know that for sure.  The assembly language source
has a number of hooks for a BIGNUM package but I don't know where that package
is.

		Results of LISP timing tests for UCI-Lisp

(Times are in the form R+G where R is the runtime (not including GC time)
and G is the GC time.  All times are in seconds.  "Interp" means interpreted,
"Slow" means compiled but using UUO-links, "Fast" means compiled with the UUO
links replaced by direct jumps.)


				   Processor
Program			KL-2060			KI-1060
	   Interp	Slow	  Fast	     Interp	  Slow		Fast

FRPOLY:
 Free:			 100000.		75000.
(bench	 0.719+0     0.142+0	0.043+0	     2.627+0	 0.576+0     0.181+0
  2)	 0.677+0     0.142+0	0.047+0	     2.619+0	 0.545+0     0.168+0
First	 0.677+0     0.141+0	0.042+0	     2.698+0	 0.580+0     0.166+0
 result	 0.687+0     0.140+0	0.043+0			 0.612+0     0.155+0
	----------------------------------------------------------------------
Third	 0.706+0     0.162+0	0.063+0	     2.585+0	 0.630+0     0.256+0
 result	 0.830+0     0.164+0	0.063+0	     2.798+0	 0.610+0     0.227+0
	 0.702+0     0.162+0	0.062+0	     2.733+0	 0.695+0     0.252+0
	 0.700+0     0.162+0	0.065+0			 0.593+0     0.215+0
	======================================================================
(bench	 5.88+0      1.166+0	0.343+0	    22.25+0	 4.384+0     1.451+0
  5)	 5.696+0     1.142+0	0.355+0     21.87+0	 4.462+0     1.297+0
First	 5.706       1.146+0	0.338+0			 4.719+0     1.500+0
 result		     1.18+0	0.351+0
	----------------------------------------------------------------------
Third	 5.891+0     1.343+0	0.523+0	    23.04+0	 4.964+0     2.097+0
 result	 5.880+0     1.383+0	0.51+0	    21.64+0	 5.084+0     2.065+0
	 5.884+0     1.345+0	0.522+0			 5.093+0     2.048+0
		     1.341+0	0.514+0
	======================================================================
(bench	122.2+1.1   25.48+1.02  8.63+1.04	--          --	    31.91+2.12
  10)		    25.14+0.98	8.42+1.02
First		    25.53+1.03	8.47+1.01
	----------------------------------------------------------------------
Third	126.4+2.2   28.17+2.02 11.57+2.04	--	    --	    39.07+6.07
 result		    28.26+2.03 11.54+2.04
		    28.18+2.04 11.28+1.98
	======================================================================
(bench	  --	    39.22+2.16 12.59+2.00	--	    --		--
  15)			       12.98+2.06
First
	----------------------------------------------------------------------
Third	  --	    43.46+3.08 17.22+3.02	--	    --		--
 result			       17.58+3.05
	======================================================================

Note:  The results referred to as the "first result" are the first value
returned by BENCH, which is computed using integer coefficients.  The
"third result" is the third value returned by BENCH (computed using
real-number coefficients).  UCI Lisp does not have the bignum package,
so it could not compute the second value returned by the BENCH routine.

-------

∂04-May-81  1326	correira at UTEXAS-11  	UTLISP benchmarks    
Date:  4 May 1981 at 1516-CDT
From: correira at UTEXAS-11 
Subject: UTLISP benchmarks
To: rpg at su-ai
cc: correira

Hi! I have been very busy of late so I was only able to get to the
benchmarks late last week.  I have three files to send and I would
like to know whether I should send them directly to you or to
LISPTIMINGS:

1) an overview of UTLISP 5.1 and the OS I am running the benchmarks
under (UT-2D),

2) the timings for SCCPP,

3) the timings for FRPOLY.

Sorry it took so long to get around to it.

					Sincerely,
					Alfred Correira
-------

Normally they are sent to LISPTRANSLATORS, which has the effect
of goading other translators into doing their jobs!
			-rpg-
∂05-May-81  0643	correira at UTEXAS-11  	SCCPP Timings for UTLISP  
Date:  5 May 1981 at 0835-CDT
From: correira at UTEXAS-11 
Subject: SCCPP Timings for UTLISP
To: lisptranslators at su-ai

Following are the timings for SCCPP for UTLISP 5.1. Because of local
interactive field length (= core size) restrictions, all runs were
submitted as batch jobs. "runtime" does not include "gctime".


Interpreted:

   run#:       #1        #2        #3        #4        #5

runtime:     18.15     18.17     18.25     18.22     18.26
 gctime:      1.34      1.37      1.34      1.35      1.34

Run at 200000 (octal) field length with: 45300. words of free space
                                          3030. words of full space
There were three garbage collections for each run.


Compiled:

   run#:       #1        #2        #3        #4        #5

runtime:      3.95      4.09      4.06      4.10      4.03
 gctime:       .97       .82       .84       .82       .83

Run at 200000 (octal) field length with: 45524. words of free space
                                         2957. words of full space
There were three garbage collections for each run.
-------

∂05-May-81  0643	correira at UTEXAS-11  	FRPOLY Timings for UTLISP 
Date:  5 May 1981 at 0836-CDT
From: correira at UTEXAS-11 
Subject: FRPOLY Timings for UTLISP
To: lisptranslators at su-ai

Following are the results for FRPOLY under UTLISP 5.1.  The runs at
75000 (octal) were run interactively; the remainder were submitted
as batch jobs. "runtime" does NOT include "gctime".

Interpreted:

bench 2: (runtime+gctime)

R:     1.168+0     1.168+0     1.149+0     1.147+0
R2:    1.181+0     1.162+0     1.171+0     1.174+0
R3:    1.175+0     1.170+0     1.171+0     1.179+0

bench 5: (runtime over gctime)

R:     9.910       9.917       9.868       9.904
        .156        .145        .152        .148
R2:    5.237       5.261       5.245       5.237
        .162        .156        .157        .156
R3:    9.930       9.899       9.960       9.927
        .323        .315        .311        .300

bench 10:

R:   213.160
       4.205
R2:    7.136
        .168
R3:  213.650
       3.994

bench 2 and bench 5 were run at a field length (= core size) of
75000 (octal) words with:  free space of 7500. words
                           full space of 2583. words
bench 2 required no garbage collections; bench 5 required 4 garbage
collections for each run.

bench 10 was run at a field length of 200000 (octal) words with:
                           free space of 41706. words
                           full space of 6685. words
bench 10 required 38 garbage collections.
For obvious reasons, I did not run bench 15.


Compiled:

bench 2:

R:      .173        .139        .153        .149
       0.          0.           .108       0.
R2:     .165        .167        .156        .150
       0.           .115       0.          0.
R3:     .155        .154        .165        .183
       0.          0.          0.          0.

bench 5:

R:     1.406       1.361       1.353       1.327
        .328        .356        .366        .385
R2:     .897        .872        .857        .861
        .159        .253        .257        .260
R3:    1.430       1.372       1.382       1.375
        .325        .395        .396        .269

bench 10:

R:    30.043      30.009      30.016
       3.989       3.866       4.010
R2:    1.219       1.172       1.218
        .143        .179        .155
R3:   30.495      30.509      30.528
       4.015       3.896       3.916

bench 15:

R:    46.046      46.030
       7.346       7.330
R2:    2.120       2.122
        .171        .174
R3:   46.945      46.736
       7.077       7.264

bench 2 and bench 5 were run at a field length of 75000 (octal)
words with:  free space of 14177. words
             full space of 1554. words
bench 2 required 0 or 1 garbage collections per run; bench 5
required 7 to 8 garbage collections per run.

bench 10 and bench 15 were run at a field length of 200000 (octal)
words with:  free space of 42913. words
             full space of 6859. words
bench 10 required 37 garbage collections per run; bench 15 required
63 garbage collections.

As you can see from the R2 results, there are times when a 60 bit word
size can come in handy.
-------

∂05-May-81  0643	correira at UTEXAS-11  	A thumbnail sketch of UTLISP   
Date:  5 May 1981 at 0821-CDT
From: correira at UTEXAS-11 
Subject: A thumbnail sketch of UTLISP
To: lisptranslators at su-ai

Below is a short description of UTLISP and the OS I am running the
benchmarks under, provided for those of you (which is probably just
about everybody) who either have never seen this LISP dialect or
whose only exposure has consisted of denigratory comments scribbled
on bathroom walls.

Alfred Correira

-------------------------------------------------------------------

UTLISP is a LISP dialect for Control Data Corporation 6000, Cyber
70, and Cyber 170 class machines. It is currently running at over
50 sites at some level (from  UTLISP3  to  UTLISP5.1)  and  under
various  operating  systems  (NOS,  NOS/BE,  KRONOS,  SCOPE,  and
several home-brews, including our local UT-2D).  UTLISP is  based
on  Blue  Book  Lisp; the first crude implementation goes back to
1967 as a project sponsored by W. W.  Bledsoe.   Extensive  revi-
sions  were  made  in  the  late  60s  and early 70s by Dr. E. M.
Greenawalt, and by Bob Amsler, Jonathan Slocum, and Mabry  Tyson,
culminating  in  Version  4.  Version 5 was the product of my ef-
forts in 1979-1980.  The current system consists  of  the  inter-
preter,  a compiler, LAP, and runtime support utilities.  Most of
UTLISP is written in COMPASS (the assembly language  of  the  CDC
mainframes)  although  much of the runtime support stuff released
with Version 5 is coded in LISP.  The compiler is  an  old,  can-
tankerous,  and obscure master's project from around 1971 that is
not frequently used - most people who use UTLISP  depend  on  the
speed  of the underlying hardware to make up for this.  UTLISP is
not used much for system-building; Bledsoe's theorem  prover  was
about  the last big project written in UTLISP that I know of, and
it was converted to UCILISP years ago due to the primitive nature
of  UTLISP at that time and the memory constraints imposed by our
OS.

Atoms in UTLISP are 60-bit quantities, comprising three 18-bit fields
(CAR, CDR, CSR) plus flag bits.  The CAR usually points at the atom
itself or to previous values of the atom (i.e. values are maintained
as stacks on the atoms themselves rather than being pushed onto the
runtime stack).  The CDR field usually points at the current value of
the atom, and the CSR field points at the atom's property list.  Thus
UTLISP uses shallow binding for storing/retrieving atom values.  There
are no bignums (although these are not usually necessary anyway with a
60-bit word size) or smallnums (inums, etc. - i.e. no encoding of
numbers in pointer fields).  The garbage collection scheme is mark and
sweep; the mark phase is based on Algorithm E in Chapter 2 of Knuth.

UTLISP 5.1 has most of the  traditional  accoutrements  (property
lists,  arrays,  funargs  that  work,  macros, read/print macros,
etc.), plus a  reader  with  flexible  lexical  classes,  virtual
memory  that allows automatic retrieval of code from disk for in-
terpreted functions, overlays, interrupts, etc.  It also has  the
essential  runtime  supports:  editor, BREAK, a MAKEFILE package,
Help,  dynamic  memory  support,  etc.  The  editor,  BREAK,  and
MAKEFILE  (DSKOUT)  packages  use  the  corresponding features in
UCILISP as models (in particular, the UTLISP editor is a slightly
scaled-down  version of the UCILISP editor).  UTLISP lacks: abort
error recovery, ASCII (except for the Help system), strings,  ef-
ficient  vectors, a binary loader for compiled code, LEXPRS, etc.
UTLISP does provide extensive error-checking for the arguments of
system functions, and random and binary I/O.

I will mention only the OS that I am running the benchmarks on -
UT-2D.  UT-2D is a home-grown OS that runs on a Dual Cyber 170/750
system.  Each Cyber has 256k of central memory and 20 peripheral
processors.  They communicate through 492k of extended-core storage.
There are three 844-4 disk systems (957 megacharacters each) and two
885 disk systems (2768 megacharacters each).  Each CPU has a 400
nanosecond cycle time with a maximum transfer rate of one word every
50 nanoseconds; memory is organized into eight independent banks.
Some disk numbers: average seek time of 30 msec. for the 844s and 25
msec. for the 885s, transfer rate of 6.45 megabits/sec. for the 844s
and 9.58 megabits/sec. for the 885s.  UT-2D itself is non-paged,
non-virtual anything.  Programs are presently restricted to 128k for
batch and 32k for interactive use, although the interactive memory
size is due to increase to 128k this summer with the installation of
some new semiconductor memory replacing the current extended-core
storage.  The system supports about 170 conversational users through
MODCOMP III and Interdata 8/32 front-ends plus a full batch load.
One Cyber is usually dedicated to running batch jobs and the other
runs conversational jobs.

As hardware for UTLISP goes, the Cyber 170/750  is  not  terribly
friendly.  There is no hardware support for recursion or environ-
ment manipulation.  There are no machine instructions for direct-
ly   accessing   the   fields   of   a   LISP   atom.    The  CPU
pipeline/instruction  stack is large and  slow  to  clear  during
jumps, which UTLISP does frequently and with mad abandon.

UTLISP is available to all who want it.  The user community is
divided primarily between classroom use to teach LISP/AI concepts
and a fair number of sites wanting UTLISP in order to run the U
of Utah's REDUCE system.



-------

∂26-May-81  0916	George J. Carrette <GJC at MIT-MC> 	benchmark.    
Date: 26 May 1981 12:16-EDT
From: George J. Carrette <GJC at MIT-MC>
Subject: benchmark.
To: RPG at MIT-MC

Files: "MC:LIBDOC;BENCH >"
       "MC:SCHEME;SCAM >"
       "MC:SCHEME;CHURCH >"

Test on MC is in "MC:SCHEME;SCAM KL10"

I'll have a LISPM and NIL test soon too.


∂09-Aug-81  1912	RPG   via CMU-20C 	Vacation   
To:   lisptiming at SU-AI   
In case some of you are wondering what is up with the Lisp timing
project, it is temporarily on vacation while I am at CMU working on S-1 Lisp.
In the fall, arpa is partially funding the effort and I will have a grad
student to translate many of the benchmarks for the various sites.
See you in the fall.
			-rpg-

∂20-Oct-81  1527	LYNCH at USC-ISIB 	Benchmarks for Interlisp-VAX   
Date: 20 Oct 1981 1522-PDT
From: LYNCH at USC-ISIB
Subject: Benchmarks for Interlisp-VAX
To:   Rindfleisch at AIM, Feigenbaum at AIM, RPG at SAIL,
To:   Pratt at AIM, Masinter at PARC, Balzer at ISIF,
To:   csvax.fateman at BERKELEY, CBF at MIT-MC, vanmelle at AIM,
To:   Schoen at AIM, CSD.Novak at SCORE
cc:   DDyer, RBates, Voreck, Saunders, Lynch

Isn't there a quote somewhere about "lies, damned lies and statistics"?
Benchmarks of complicated programming environments like Lisp are 
probably in an even less illustrious position.  We took
the TAK function and ran it on our Interlisp-VAX for the 11/780 we 
have here at ISI.  Dave Dyer just typed in the little program and
compiled it and ran it and it took 10.5 seconds of CPU time
to run for the case TAK(18. 12. 6.).  He then replaced GREATERP by
IGREATERP and it took 5.0 seconds.  That is what it did today.
We are in the process of doing a complete rewrite of the compiler
(the existing one was done in a hurry with the aim of getting something
up quick that is "correct") and expect some gains in execution 
speed to be made when it is done.  Can we tune it to make TAK go
much faster?  Sure!  But we probably won't make that specific
attempt.  What matters to programmers?  Speed or habitability?
The answer is of course: both.  ISI has aimed for habitability
and completeness first.  We are about there.  Now we go for speed.
That will take a long time as all system developers know.
For Interlisp on the VAX there is the issue of having it
behave "well" in the timesharing environment as opposed
to taking over the whole machine in a single user environment.
At this point we have ignored that issue (assumed the single-user
environment) and expect that the loudest cries from new users
will come from their brethren who are unlucky enough to be
on the same machine with them at the same time.  Not at
all unlike the current situation with PDP-10s, eh?
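
For reference, the TAK function timed above is usually written along
the following lines.  This is the commonly circulated form of
Takeuchi's function in MacLisp-style notation; the exact text Dave
Dyer typed in at ISI may differ in detail.

```lisp
;;; The usual Takeuchi benchmark function.  GREATERP is the generic
;;; arithmetic comparison; replacing it with the fixnum-only
;;; IGREATERP is the one-line change described in the message above.
(DEFUN TAK (X Y Z)
  (COND ((NOT (GREATERP X Y)) Z)
        (T (TAK (TAK (SUB1 X) Y Z)
                (TAK (SUB1 Y) Z X)
                (TAK (SUB1 Z) X Y)))))

;;; The benchmark call:
;;; (TAK 18. 12. 6.)  =>  7
```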

Does anyone wish to nominate a set
of meaningful benchmarks that most of us can code up
and run?  Or will they each generate more questions than 
answers?

Dan 
-------

∂20-Oct-81  1614	Doug Lenat <CSD.LENAT at SU-SCORE> 	Save the Dolphins  
Date: 20 Oct 1981 1613-PDT
From: Doug Lenat <CSD.LENAT at SU-SCORE>
Subject: Save the Dolphins
To: pratt at SU-HPP-VAX, balzer at USC-ISI, masinter at PARC-MAXC,
    lynch at USC-ISIB, feigenbaum at SUMEX-AIM, rindfleisch at SUMEX-AIM,
    rpg at SU-AI, csd.novak at SU-SCORE
cc: csd.lenat at SU-SCORE

After carefully isolating myself from the Dolphin versus X  controversy
up until now, I feel I must at least send a brief note on behalf of the
cetaceans.

First of all, what is this business of comparing "reported cpu time" on
large machines with elapsed time on small ones?  What happened to
time for garbage collection, changing virtual images, etc. on the big
machines?

Second of all, where do I go to buy time on a 2060 with load avg of 1?
Most of the big machines I know crowd up to the point where they are
just barely not usable for ANY Interlisp jobs during the day.

Third of all, where do you spend your typical daytime moments
when coding?  95% of my time goes into editing, browsing through
old functions, debugging my code on tiny-cpu-time-usage calls on
EVAL (usually from within the editor, which is something
MACLISP lacks, of course).  For all of these activities, the
Dolphin has proven itself to me (over the past year) to be
at least as good as an unloaded 2060 (yes, I can find them at night),
and in many ways superior.  Superiority comes from tiny places
(big screen, plus decent window packages, mean less groping and
redisplaying), from the complete absence of timesharing (I never
see a delay during editing on a Dolphin, but I really do notice
one when I go back and use a 2060, even with load avg in the 1-2 range),
and from the predictability of the response of the machine (I never
have to come in and bemoan the fact that the system is down, or
that it is crowded much more/less than I expected, etc.)

Fourth of all, one can get to know what things take a long time
on Dolphins (making and loading files, running interpreted code)
and minimize doing them.  One approach is to type CLEANUP) just
before going home each night.  I have left my Dolphin running
for several days and nights straight, doing computations, with no
worry about the machine crashing on me.  (At least, I wasn't
concerned UNTIL Larry Masinter was impressed that I live like that!)
Anyway, the only runs where one cares overmuch about cpu time
can be done on compiled code.  Gordon Novak told me that his
run was done on interpreted code. The fraction
interp-running-time/compiled-running-time is much larger for
Dolphins than for other machines, so it is not surprising that
a large interpreted program languishes on the Dolphin.

Fifth, I have been amazed at the responsiveness of Masinter, Sheil,
and the other Interlisp-D group folk, when faced with
complaints.  They have built up a very high credibility with me
by now, and I take their predictions about future improvements
quite seriously.  The Dolphin has grown much better over the year
or so I've used it.  The slowness of the interpreter is one of
the problems they intend to correct -- and the slow dealings with
files should be ameliorated as well.

Sixth, what is this fuss about CONSing?  I don't get paid by the
size of the data structures I produce, but with the appropriateness
of their contents.  Most of my time is spent doing
GET, PUT, CAR, CDR, and logical operations (including COND).
Maybe only 2-4 per cent of my time is spent in CONSing.
The complaint about slow function calling is valid, and is another
item near the top of the Dolphin group's stack.

Seventh is machine reliability.  It is reliable.  As I said, I use it
for periods of days at a time between sysouts with
little worry.  Mine has never broken down, nor has anyone ever
commented to me that theirs has.  Bugs in Interlisp-D itself
were common a year ago, a weekly occurrence six months ago,
and virtually nonexistent now.  There is a bad phenomenon
called sandbarring that occasionally slows the printing down
to a crawl, and that is probably one of the chief culprits
in Novak's results.  This occurs during the running of interpreted
code, and is at the top of the Dolphin lisp group's stack to fix.

Eighth and finally, I spend most of my programming time on
Dolphins.  I have acess to several machines of various types,
and often could log in to a nearly-empty machine with
INTERLISP-10 if I chose to, but I do not.  The quality of life
on the Dolphin is too high for me to consider switching back.
I have read notes where folks glibly equate N Dolphins to a
KL10 or a 2060; figures of N=25 or 40 have been mentioned.
For my money, and my time, a figure of about N=3 is right.

Doug Lenat

PS: I could supply timing data on Eurisko, showing it to run
about a factor of 4-5 times slower on a Dolphin than on a 2060,
but that would tend to obscure the majority of the 8 points above,
and would lend an authenticity and weight to the other recent
"benchmarks" that they do not deserve.  It's not that mine would
be more accurate, just that NONE of the tests we're doing is
accurate enough -- using different flag options to the compiler
has produced runtime differences of over an order of magnitude
on a 2060 in the runtime of the final compiled code.
-------

∂20-Oct-81  1744	pratt@Diablo (SuNet) 	Benchmarks for Interlisp-VAX
Date: 20 Oct 1981 17:36:24-PDT
From: pratt at Diablo
To: Feigenbaum@AIM, LYNCH@USC-ISIB, RPG@SAIL, Rindfleisch@AIM, balzer@isif,
    cbf@mc, csd.novak@score, csvax.fateman@berkeley, masinter@parc,
    schoen@aim, vanmelle@aim
Subject: Benchmarks for Interlisp-VAX
Cc: DDyer@usc-isib, RBates@usc-isib, Saunders@usc-isib, Voreck@usc-isib

A recent Score bboard message of mine describing the results of some 
benchmarks had a subject line of "Lies, damned lies, and benchmarks."
Benchmarks, like taxes and exams, are unjust, unpopular, but unavoidable.

Here is an excerpt from a note I sent a few days ago to the Stanford Computer 
Science Department Computing Facilities Committee, spelling out the criteria 
I apply to benchmarks.  Note that the criteria are not meant to yield the 
"universal benchmark," which does not exist, but rather to yield programs 
whose behavior on a machine will be suggestive of that machine's day-to-day 
performance.

begin excerpt
-----------------

Here are the criteria I have been using to date in choosing Lisp
benchmarks.

1. The benchmark should solve a problem whose computer solution is frequently 
called for.

2. The programming style used should be one that helps make programs more 
writable, readable, maintainable, and portable.

3.  Subject to criterion 2, the benchmark should implement an efficient 
solution to the problem it solves.

Criterion 1 is to avoid the complaint commonly made about some benchmarks 
that they do not test real world problems.  Criterion 2 attempts to live up
to standards recommended by software engineers based on relative costs of
people and hardware, a ratio that programmers continue to have difficulty
believing in and adjusting to.  Criterion 3 supplements criterion 1 by weeding
out programs that would not arise in practice on account of being too
inefficient.  (For the most realistic benchmarks, criterion 3 should receive
the same degree of emphasis as in typical programming; since this emphasis is
highly variable the criterion may safely be interpreted somewhat vaguely.)

Customer benchmarks can afford to meet criterion 1 more narrowly than general
benchmarks, in that "wide use" can be construed to mean "heavily used by the
customer."  Thus for HPP purchases it makes sense to use HPP programs as
benchmarks, at least up to a point.  Current HPP programs give an accurate 
estimate for immediate applications, but this accuracy starts to drift as 
programs, programmers, and programming styles change.  Thus for long-range 
predictions, programs of general utility, in particular programs that have 
found frequent application over a period of many years in many places and can 
be expected to continue to show up as subroutines in many programs, make for 
more satisfactory benchmarks.

A machine that requires benchmarks not meeting criterion 1 in order to look
good is not tuned for real world problems.  If a programming style that 
defeats criterion 2 is needed for a machine not to look like a dog compared 
to other machines then that machine is not tuned for the economics of today's 
computing milieu.  Defeating criterion 3 to reduce the performance gap 
between machines is like holding Olympic track events in molasses.

----------------
end excerpt

I would add to these criteria the following advice on interpreting benchmark 
timings:

1.  Don't take any time seriously other than real (wall-clock) time.  If you
are on a time-shared computer, you have two variables to contend with: real
time and deviation from typical load.  A one-datum benchmark will measure
real time under typical load conditions; for more detail you can plot real 
time against load factor.  Personal computers are simpler, with only one 
variable to worry about: real time.

2.  Don't treat factors of 2 seriously for a single benchmark.  You can 
easily find variations of this magnitude and more in the ratio of the 
performance of two machines, by changing programmer, language, and/or 
benchmark.  Differences between machines only start to become significant 
when you have observed them systematically over a representatively broad 
range of benchmarks, holding constant only those things you are actually 
trying to measure (such as the machine itself, or the machine and a 
particular compiler).

Three simple benchmarks I have been looking at recently were chosen from the 
areas of algebraic manipulation, formal logic, and sorting.  The respective 
functions are:

1.  deriv(exp) which computes the derivative with respect to X of the 
rational expression 'exp' (a rational expression is one involving variables, 
constants, PLUS, DIFFERENCE, TIMES, and QUOTIENT), without attempting any 
simplification of the result;

2.  dnf(wff) which converts the well-formed formula 'wff' (consisting of 
propositional variables combined with NOT, AND, and OR, where AND and OR may 
take any number of arguments) to disjunctive normal form (a list of lists of 
literals, where a typical literal is either the variable P or (NOT P));

3.  sort(L) which sorts list 'L' using merge sort, using lists rather than arrays
to hold intermediate values.
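
Pratt's actual sources live on <CSD.PRATT> at Score; for readers
without access, the first of the three functions typically looks
something like the sketch below.  This is a plausible reconstruction
of the shape of such a DERIV, not Pratt's exact code.

```lisp
;;; Sketch of DERIV as described above: differentiate a rational
;;; expression with respect to X, attempting no simplification.
;;; Assumes PLUS/DIFFERENCE/TIMES/QUOTIENT are two-argument forms;
;;; Pratt's files on <CSD.PRATT> are the authoritative version.
(DEFUN DERIV (E)
  (COND ((ATOM E) (COND ((EQ E 'X) 1) (T 0)))
        ((MEMQ (CAR E) '(PLUS DIFFERENCE))
         (LIST (CAR E) (DERIV (CADR E)) (DERIV (CADDR E))))
        ((EQ (CAR E) 'TIMES)            ; product rule
         (LIST 'PLUS
               (LIST 'TIMES (CADR E) (DERIV (CADDR E)))
               (LIST 'TIMES (DERIV (CADR E)) (CADDR E))))
        ((EQ (CAR E) 'QUOTIENT)         ; quotient rule
         (LIST 'QUOTIENT
               (LIST 'DIFFERENCE
                     (LIST 'TIMES (DERIV (CADR E)) (CADDR E))
                     (LIST 'TIMES (CADR E) (DERIV (CADDR E))))
               (LIST 'TIMES (CADDR E) (CADDR E))))
        (T (ERROR '|DERIV: unknown operator| E))))
```

Note that every LIST call here conses; this is the property RPG's
reply below and the "Fairness" exchange take up.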

I have coded up my own Maclisp and Interlisp implementations of these and stored
them on <CSD.PRATT> at Score, with respectively .L and .IL suffixes (so that
e.g. the Interlisp sort is called SORT.IL).  I'd be interested in seeing
(a) these exact programs timed on other Lisps
(b) other implementations of these functions similarly timed.
The motivation behind (b) is to reduce the extent to which the benchmarks are
measuring me as well as the machines and compilers.  The reimplementations
should be run on all machines and compilers being benchmarked.

I would also like to see proposed other benchmarks that meet my three criteria
above.  Three benchmarks is a start, but do not begin to span the range of
real-world Lisp applications.  Note that the Takeuchi function fails two of my
criteria: 1 for obvious reasons, 3 because there is a closed form expression
for this function.  The usual recursive implementation of Fibonacci(n) would fail
the same two criteria, failing criterion 3 by being two exponentials slower than
a good algorithm.

It would be nice to include some very large benchmarks.  The main obstacle here is
the high cost of porting large programs between the machines being compared.  To
the extent that this obstacle can be overcome, large benchmarks are most welcome.

	Vaughan Pratt





∂21-Oct-81  0109	RPG  	Criterion 1   
To:   VRP at SU-AI, "#TIMING.MSG[TIM,LSP]" at SU-AI  

I doubt that criterion 1 has any special relevance except to
sound good (or to qualify for equal opportunity employment funds).
Here's why. What does anyone care how long the derivative function takes?
Because he (let's say) wants to do similar things in his program. What
similar things can a person want to do: Traverse list structure, make
a copy, and transform it in some way. The interesting components of
this are traversal, copying, and testing. Traversal involves car and
cdr accesses, copying involves cons, testing involves eq or a type test.

Since no one wants to do derivatives from scratch, the exact benchmark is
irrelevant unless it is totally typical or average. It is much better
to provide profiles of benchmarked or analyzed primitives or operations.
For example: cons, car access, cdr access, fixnum arithmetic, flonum
arithmetic, bignum arithmetic, function call, frame retention (InterLisp),
IO, array access, array creation, array updating, array copying, vector
access, vector creation, vector updating, value passing,
multiple value passing, non-local control structures, variable access,
variable updating, function loading, special access, local access, binding
times, garbage collection of cons cells, arrays, fixnums, flonums, bignums,
cdr-coding efficiency, paging time, swapping time, compiling time, compiled
code speed versus interpreted code speed, hierarchy flattening time,
plist access, plist updating, function cell lookup, assignment, stack frame
creation time, etc.

Programs like deriv only test 4 of these, in an unintuitive mix. Tak, at least,
is known to test 1 thing only: stack operations and the way the compiler
manages such things. It might also reveal optimizations that the compiler
can do. Tak, you claim, is uninteresting. Yet it tells more information
because it pinpoints its point of impact. Deriv, I'd say, essentially
does the same: it is CONS intensive.

I don't object to your benchmarks. It's just that I think you do a slight
disservice to the whole endeavor by claiming that you are choosing excellent
benchmarks when you are simply picking convenient ones or something.

Since the audience that we are aiming at shouldn't be solving problems
that are commonly solved, we should measure the various components
and possibly rank the Lisps and machines according to a well-defined
measure combining all aspects.
			-rpg-

∂17-Oct-81  2340	pratt@Diablo (SuNet) 	Fairness
Date: 17 Oct 1981 23:35:38-PDT
From: pratt at Diablo
To: equip, genesereth@score, novak@score
Subject: Fairness

(The following is in response to the Takeuc(h?)i benchmark from RPG.)

Here are the criteria I have been using to date in choosing Lisp
benchmarks.

1. The benchmark should solve a problem whose computer solution is frequently 
called for.

2. The programming style used should be one that helps make programs more 
writable, readable, maintainable, and portable.

3.  Subject to criterion 2, the benchmark should implement an efficient 
solution to the problem it solves.

Criterion 1 is to avoid the complaint commonly made about some benchmarks 
that they do not test real world problems.  Criterion 2 attempts to live up
to standards recommended by software engineers based on relative costs of
people and hardware, a ratio that programmers continue to have difficulty
believing in and adjusting to.  Criterion 3 supplements criterion 1 by weeding
out programs that would not arise in practice on account of being too
inefficient.  (For the most realistic benchmarks, criterion 3 should receive
the same degree of emphasis as in typical programming; since this emphasis is
highly variable the criterion may safely be interpreted somewhat vaguely.)

Customer benchmarks can afford to meet criterion 1 more narrowly than general
benchmarks, in that "wide use" can be construed to mean "heavily used by the
customer."  Thus for HPP purchases it makes sense to use HPP programs as
benchmarks, at least up to a point.  Current HPP programs give an accurate 
estimate for immediate applications, but this accuracy starts to drift as 
programs, programmers, and programming styles change.  Thus for long-range 
predictions, programs of general utility, in particular programs that have 
found frequent application over a period of many years in many places and can 
be expected to continue to show up as subroutines in many programs, make for 
more satisfactory benchmarks.

A machine that requires benchmarks not meeting criterion 1 in order to look
good is not tuned for real world problems.  If a programming style that 
defeats criterion 2 is needed for a machine not to look like a dog compared 
to other machines then that machine is not tuned for the economics of today's 
computing milieu.  Defeating criterion 3 to reduce the performance gap 
between machines is like holding Olympic track events in molasses.

With these criteria in mind, I'd like to defend myself against the objections
that have been raised about one of my Lisp benchmarks consing excessively.
That it was considered excessive surprised me since I thought I had 
implemented differentiation in a pretty standard Lisp style, and pretty 
efficiently at that, meeting all three of my criteria.

To see what happened in the "real world" I went over to Macsyma and took the 
derivative of the same expression used in my benchmark, 3*x*x+a*x*x+b*x+5.  
Macsyma performed around 300 conses, of which somewhere between 150 and 200 
appeared to be attributable to actually taking the derivative, the rest being 
due to parsing and other overhead.  My program performed only 61 conses, the 
lower number probably being attributable to my not attempting any 
simplification of the result.

Conclusion: I see no sustainable objection to my benchmark.

I might add that I chose it, along with two other benchmarks, purely using
my three criteria.  I had no prior expectations that it would exercise one
part of Lisp more than another, although I also did not expect that it would
serve as a universal benchmark.  There is no such thing as a universal 
benchmark; at best you can only hope to have a broad range of representative 
benchmarks.  This is why I had three benchmarks solving problems from
three distinct areas, algebraic manipulation, logic, and sorting.  Lack of time
has prevented me from covering yet more territory, and I am grateful for all
contributed benchmarks from other sources.  However, if your contribution 
comes with the remark that I am being unfair in not having a sufficiently 
broad range of benchmarks (as did the Takeuchi benchmark), I will be rather 
piqued; I just don't have the resources to produce a sufficiently
representative range of benchmarks on my own.

I do not think that one should strive for fairness in benchmarking by 
trying to distribute problems according to how well a particular machine 
performs.  Fairness does not mean that everyone should get a prize, but rather
that the judging be done with methods that lead to informative and accurate judgments.
If it turns out that a set of benchmarks representing a suitably diverse 
range of applications runs poorly on a given machine in comparison to other 
machines, I don't consider it fair to then attempt to put that machine in a 
good light by looking specifically for benchmarks that favor that machine, 
any more than I would consider it fair to look for benchmarks that make the 
machine perform poorly relative to other machines.

In the case of the Takeuchi benchmark I get the feeling that it was chosen more
because it did no consing at all and was therefore likely to perform better 
on the Dolphin than because of any consideration of representativeness.  
Whether or not this was actually the case, I can at least raise the technical 
objection that this benchmark fails my criteria 1 and 3.  (Criterion 3 fails 
because there is a closed-form expression for Takeuchi's function, permitting 
it to be computed in constant time rather than in what appears to be 
exponential time.)
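[The closed-form objection can be checked mechanically.  A sketch in C of the
y-returning Takeuchi variant quoted later in this file, beside a closed form;
the closed form is an assumption taken from the standard analysis of this
variant, not something derived in the message:]

```c
/* Takeuchi's function as benchmarked: exponential-looking recursion. */
int tak(int x, int y, int z) {
    if (x > y)
        return tak(tak(x - 1, y, z),
                   tak(y - 1, z, x),
                   tak(z - 1, x, y));
    return y;
}

/* Assumed closed form for this variant: y if x <= y, else z if y <= z,
 * else x -- constant time, no recursion at all. */
int tak_closed(int x, int y, int z) {
    if (x <= y) return y;
    if (y <= z) return z;
    return x;
}
```

[So (TAK 18 12 6) is 18 without a single recursive call, which is the sense
in which criterion 3 fails.]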

One way to come up with a benchmark that meets my three criteria but that 
should do as well as Takeuchi in making Dolphins look good would be to
implement ASSOC straightforwardly.  This would not make me any happier about 
maintaining a spirit of representativeness, but at least it would dispose of 
my technical objections to the Takeuchi benchmark.
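[A straightforward ASSOC of the kind meant here, sketched in C over a linked
association list; the struct and names are invented for illustration.  Like
the Lisp original it allocates nothing, spending all its time on pointer
chasing and comparisons:]

```c
#include <stddef.h>

/* Hypothetical association-list entry: (key . value) pairs chained
 * through next, standing in for a Lisp alist of dotted pairs. */
struct pair {
    int key;
    int value;
    struct pair *next;
};

/* assoc: return the first pair whose key matches, or NULL.  The loop
 * is nothing but car, cdr, and an EQ-style comparison -- no consing. */
struct pair *assoc(int key, struct pair *alist) {
    for (; alist != NULL; alist = alist->next)
        if (alist->key == key)
            return alist;
    return NULL;
}
```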

Incidentally, the Takeuchi benchmark consumes 2.1 seconds using Franz Lisp on
Diablo.  (RPG's timings were .83 for Sail, 4.1 for the Foonly, and 11.2 for 
the Dolphin.)  For what it's worth a C version of it ran in 1.35 seconds on 
the Vax and 1.9 seconds on the Sun.

	Vaughan

∂18-Oct-81  2141	pratt@Diablo (SuNet) 	For what it's worth    
Date: 18 Oct 1981 21:40:08-PDT
From: pratt at Diablo
To: RPG@Sail, equip@DIABLO
Subject: For what it's worth

	From: Dick Gabriel <RPG at SU-AI>
	Subject: For what it's worth
	To:   equip at DIABLO  
	
	DERIV coincidentally was a very CONS intensive program. TAK
	is function call intensive and has no CONSing of any kind.
	By `fairness' I meant that it is rare in a `natural' Lisp program
	that CONSing is done in such high percentages. 

My Macsyma data didn't sway you then?  Are you saying that Macsyma is
an unnatural Lisp program or a rare one?

	If TAK in C on the SUN and VAX are interesting, how about TAK in
	FAIL on SAIL?:
		.436 seconds

Using pretty straightforward assembly language on the Sun I measured .70 
seconds.  I'm surprised the Sail/Sun gap is so small, I thought KL-10's were 
supposed to be blindingly fast.  I certainly wasn't expecting the Sun to be 62%
of a KL-10 for a function-call-intensive benchmark!

∂18-Oct-81  2254	RPG@Sail (SuNet) 	Several points:       
Date: 18 Oct 1981 2246-PDT
From: Dick Gabriel <RPG at SU-AI>
Subject: Several points:    
To: equip at DIABLO

	1. Macsyma IS a natural program: it did a lot of CONSing, but
I doubt that the percentage of CONSing to other things is anywhere
near as high as in VRP's DERIV program. This is because MACSYMA does a lot
of stuff while taking derivatives (for example, displaying the answer).
I write much Lisp code, and my code certainly does not mention several
conses per line, as Pratt's DERIV function does. The Macsyma derivative
is not Pratt's.

Any benchmark such as APPEND or DERIV is pathological. I never see such
code AS THE NORM. I see: type testing, MEMQ's, EQ's, COND's, lambda-binding,
array/vector access/assignment, function calls. The programs I deal with
do a lot of CONSing, but not at the rate that Pratt's does. If his DERIV
is natural, then the Dolphin is 260 times slower than SAIL. No one else
reports that. QED.

	2. The `art' of benchmarking is subtle. For example, what does it
mean to ``time'' a benchmark? The assembly language code I wrote was
loaded in MacLisp, and I used the timing mechanism I always do to
time things (to be consistent). The mechanism for measuring times
on SAIL is the RUNTIM UUO, which is known to measure a quantity related
to actual EBOX execution time. I'm not sure what it measures, since it
appears to count as EBOX time for my job the code run at interrupt level
while I am active (such as interrupts for character input from any user).
It may count cache filling time. Recall that the memory on SAIL consists
of 256k of 900 nanosecond memory and 1 meg of 3 microsecond memory. Until
the cache is filled I'm losing. Of course, I get charged for the execution
of RUNTIM. With a benchmark so short, this is significant.

The timing methodology counted several Lisp function calls, so
I eliminated that, redid it, and got 380 milliseconds. Even at that
I don't know what I measured. When I write up the Lisp Evaluation
results, I will read the system code, study the hardware, and try to
relate my results to some reality. Now that you know what I know
about my measurements, what did Pratt measure?
			-rpg-

∂19-Oct-81  0935	RINDFLEISCH@SUMEX-AIM (SuNet) 	FYI - Other Lisp Timing Thrashes  
Date: 19 Oct 1981 0926-PDT
From: Rindfleisch at SUMEX-AIM
Subject: FYI - Other Lisp Timing Thrashes
To: Equip at SU-HPP-VAX
cc: [SUMEX] at SUMEX-AIM, ETHERNET at SUMEX-AIM, DEV at SUMEX-AIM, GRP:

   1   18 Oct  Masinter at PARC-MAXC some more races out of the past
   2   18 Oct  Masinter at PARC-MAXC timings - fyi


1 -- ************************
Mail-from: ARPANET host PARC-MAXC rcvd at 18-Oct-81 1249-PDT
Date: 18 Oct 1981 10:12 PDT
From: Masinter at PARC-MAXC
Subject: some more races out of the past
To: Rindfleisch@sumex-aim


---------------------------

Mail-from: Arpanet host MIT-MC rcvd at 26-FEB-81 2243-PST
Date: 26 Feb 1981 14:42:52-PST
From: CSVAX.fateman at Berkeley
To: CSVAX.jkf@Berkeley, jlk@mit-mc, lisp-forum@mit-mc, rz@mit-mc
Cc: CSVAX.fateman@Berkeley

 |-------------+---------+-----------+---------+---------|
 |             | UCILISP | INTERLISP | MACLISP |Franz/VAX| 
 |-------------+---------+-----------+---------+---------|
 | Interpreter |   57.0  |    26.0   |  22.8   |  65.0   |
 |-------------+---------+-----------+---------+---------|
 | Compiler    |    2.90 |    15.0   |   0.69  | 1.1 **  |
 |-------------+---------+-----------+---------+---------|

 Times are for (TAK 4 2 0), where TAK is an interesting function
 defined by Mr. Ikuo Takeuchi.
 (DEFUN TAK (X Y Z)
	(COND ((GREATERP X Y)
	       (TAK (TAK (SUB1 X) Y Z) 
		    (TAK (SUB1 Y) Z X)
		    (TAK (SUB1 Z) X Y) ))
	      (T Y) ))
(**) 5.3 with (1- x) etc. [no other declarations, so GREATERP is a closed-compiled call]
     4.1 with local function declaration (fast subroutine call)
     1.1 with > open compiled 
     times on a VAX 11/780 at Berkeley, Feb. 26, 1981


------------------------------------------------------------



2 -- ************************
Mail-from: ARPANET host PARC-MAXC rcvd at 18-Oct-81 1249-PDT
Date: 18 Oct 1981 09:55 PDT
From: Masinter at PARC-MAXC
Subject: timings - fyi
To: LispGroup↑, Rindfleisch@sumex-aim
Reply-To: Masinter

---------------------------

Mail-from: Arpanet host MIT-MC rcvd at 1-MAR-81 2221-PST
Date: 2 March 1981 00:55-EST
From: Charles Frankston <CBF at MIT-MC>
Subject: timings
To: CSVAX.fateman at BERKELEY
cc: LISP-FORUM at MIT-MC, masinter at PARC-MAXC, RWS at MIT-XX,
    guttag at MIT-XX

It is rather obvious that the timings you distributed are wall times for
the Lisp Machine, whereas the Vax and MC times count only time spent
directly executing code that is considered part of Macsyma.  I.e., the
Vax and MC times exclude not only garbage collection, but operating system
overhead, disk i/o and/or paging, time to output characters to terminals, etc.

I submit that comparing wall times with (what the Multics people call) "virtual
CPU" time is not a very informative exercise.  I'm not sure if the Lisp
Machine has the facilities to make analogous measurements, but everyone
can measure wall time, and in some ways that's the most useful comparison.
Is anyone willing to try the same benchmarks on the Vax and MC with just
one user on and measuring wall times?

Also, are there yet any Lisp machines with greater than 256K words?  No
one would dream of running Macsyma on a 256K word PDP10 and I presume that
goes the same for a 1 Megabyte Vax.  The Lisp Machine may not have a time
sharing system resident in core, but in terms of amount of memory needed
for operating system overhead, the fanciness of its user interface
probably more than makes up for that.  I'll bet another 128K words of
memory would not be beyond the point of diminishing returns, insofar
as running Macsyma is concerned.

Lastly, the choice of examples.  Due to internal Macsyma optimizations,
these examples have a property I don't like in a benchmark.  The timings
for subsequent runs in the same environment differ widely from previous
runs.  It is often useful to be able to factor out setup times from a
benchmark.  These benchmarks would seem to run the danger of being
dominated by setup costs.  (Eg. suppose disk I/O is much more expensive
on one system; that is probably not generally interesting to a Macsyma user,
but it could dominate benchmarks such as these.)

I would be as interested as anyone else in seeing the various lisp systems
benchmarked.  I hope there is a reasonable understanding in the various
Lisp communities of how to do fair and accurate benchmarking, else the
results will be worse than useless; they will be damaging.

------------------------------------------------------------

-------

∂19-Oct-81  1045	pratt@Diablo (SuNet) 	Several points:   
Date: 19 Oct 1981 10:43:49-PDT
From: pratt at Diablo
To: RPG@Sail, equip@DIABLO
Subject: Several points:

[Seems to me benchmarking generates more debate than information.  -vrp]

	From: Dick Gabriel <RPG at SU-AI>
	
		1. Macsyma IS a natural program: it did a lot of CONSing, but
	I doubt that the percentage of CONSing to other things is anywhere
	near as high as in VRP's DERIV program. This is because MACSYMA does a lot
	of stuff while taking derivatives (for example, displaying the answer).

The measurements I made of Macsyma's differentiator did not include the work
done during display, nor during parsing.  Nor should it if you are just 
comparing differentiation programs.  What is the "lot of stuff" Macsyma does 
WHILE taking derivatives?

	I write much Lisp code, and my code certainly does not mention several
	conses per line, as Pratt's DERIV function does.

Ok, let's see your version of DERIV.  I'll be interested to see how you manage
to use fewer conses per line.  No fair merely spreading the code over more
lines.

							  The Macsyma 
	derivative is not Pratt's.

Your argument here seems to be that because you do something, Macsyma does
it too.

We can resolve the question of what Macsyma does by looking at the Macsyma 
code for DIFF, a copy of which I have requested from Jeff Golden.  (Maybe I 
should have asked Fateman, on the principle that one goes to the Soviet 
embassy to make casual inquiries about US military secrets.  I heard Moses 
was rather upset that Fateman had ported Macsyma to Franz Lisp.)

		2. ...[on the meaning of time]...
	.
	.
	.
	Now that you know what I know
	about my measurements, what did Pratt measure?

Depends on the machine, but in all cases my wristwatch is the final authority.

Sun and Dolphin:	wristwatch time over a large number of runs.  (On the
			Dolphin this seems to agree with the time returned by
			InterlispD's TIME function.)

Vax:			user cpu time as returned by the 'times' kernel call.
			The user cpu time has the following two properties:
			(1) I have observed little variation of this parameter
			over a wide range of system loads, cache usage
			considerations notwithstanding.
			(2) For very low system loads I have seen it
			come to within 90% or so of wristwatch time.
			These two observations together imply that the 'times'
			kernel call is a reliable indicator of user cpu time.
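[The measurement described here can be sketched in C.  The 'times' kernel
call reports user CPU in clock ticks; this sketch uses the modern POSIX
spelling sysconf(_SC_CLK_TCK) for the tick rate, an assumption rather than
the 1981 interface, though the idea is the same:]

```c
#include <sys/times.h>
#include <unistd.h>

/* A trivial workload, just for demonstration. */
static volatile long sink;
static void spin(void) { for (long i = 0; i < 1000000; i++) sink += i; }

/* User CPU seconds consumed by `runs` calls of `benchmark`, as reported
 * by the 'times' kernel call.  Tick granularity is coarse, which is one
 * reason to time many runs rather than a single short one. */
double user_seconds(void (*benchmark)(void), int runs) {
    struct tms before, after;
    long ticks_per_sec = sysconf(_SC_CLK_TCK);
    times(&before);
    for (int i = 0; i < runs; i++)
        benchmark();
    times(&after);
    return (double)(after.tms_utime - before.tms_utime) / ticks_per_sec;
}
```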

∂19-Oct-81  1143	RPG@Sail (SuNet) 	Long, silly response to Vaughan Pratt      
Date: 19 Oct 1981 1135-PDT
From: Dick Gabriel <RPG at SU-AI>
Subject: Long, silly response to Vaughan Pratt   
To: equip at DIABLO

In this message I reply to the cogent parts of Pratt's comments. The
non sequiturs in his message, which will be obvious when you briefly read
this, will be ignored, unless misunderstandings persist. Most
of you may want to delete this message now.

Ok Vaughan. Let me state this simply, so that you can understand it.

1. Your DERIV program has a higher percentage of conses than `natural' code.

2. `Natural code' means code that occurs naturally in the day-to-day
work of actual Lisp programmers. 

3. `Natural code' does not mean the `natural' code for DERIV. (Which is
EQ to yours, and which I stated to you on several occasions already in
private).

4. Since you stated the fact that Macsyma differentiation took more conses,
I assumed that meant that it used a different program (and possibly did other
things too).

5. I never stated that your DERIV took excessive CONSes in the sense that
it was programmed badly. It, like APPEND, requires CONSes to copy the
structure, which I assume would be part of the specification.

6. Here is some `naturally' occurring code, written by Guy Steele and
myself.  It was randomly taken from pages 31, 41, and 59 (3.14159) of
ANALYZE.LSP[NIL,S1].  I will point out the CONSes.  Of the approximately 153
lines below about 12 have CONSes on them. Most data structure operations
here are vector references and assignments. There are a lot of lambda's
(disguised as LETs) and control structures.  The rest of you can delete
the remainder of this message as I assume you understand my point. Vaughan,
read on:

∂19-Oct-81  1545	Jeff Rubin <JBR at S1-A> 
Date: 19 Oct 1981 1544-PDT
From: Jeff Rubin <JBR at S1-A>
To:   rpg at SU-AI

 ∂19-Oct-81  1141	RPG   via SU-AI 	RUNTIM  
can you give me the poop on what RUNTIM UUO measures on SAIL? How much
of other users, context switching, etc, do I get charged for there.
Technical details welcome.
			-rpg-

It measures EBOX milliseconds.  You might possibly be getting charged for
somebody else's spacewar or interrupt level.  I don't really remember.
You get charged for some amount of context switching and scheduling
(possibly even figuring out that it should just run you again next).
--jeff

∂21-Oct-81  1325	RPG  	Wall time
To:   pratt at DIABLO, "#TIMING.MSG[TIM,LSP]" at SU-AI    

You state:

-----
1.  Don't take any time seriously other than real (wall-clock) time.  If you
are on a time-shared computer, you have two variables to contend with: real
time and deviation from typical load.  A one-datum benchmark will measure
real time under typical load conditions, for more detail you can plot real 
time against load factor.  Personal computers are simpler with only one 
variable to worry about, real time.
-----

There are, as usual, a number of points that you slough over in this
statement, which strikes me as having been given in the spirit of an aged
philosopher addressing young upstarts.

First, one can and must take every time reported seriously as long as there
is sufficient other material to evaluate the meaning of that time well.
For example, consider Tak(18,12,6). Suppose that there is no way to increase
the duration of a single run (for whatever reason). Since the run is so short
(on a reasonable machine) the only way to gather time via `wall clock' is
with multiple runs. In this case, unless an analysis is done on the cost
of the multiple timing control structures, you have no idea what else you
are measuring. 
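[The analysis argued for here can be sketched: time N runs of the benchmark,
time N runs of an empty stub through the same control structure, and
subtract, so the multiple-run machinery itself is factored out.  A minimal
sketch in C, using the portable clock() as a stand-in for whatever timer a
given system provides:]

```c
#include <time.h>

static void nothing(void) {}   /* the empty stub */

/* A short workload, just for demonstration. */
static volatile long acc;
static void work(void) { for (long i = 0; i < 2000000; i++) acc += i; }

/* Net seconds for `runs` calls of `benchmark`, with the cost of the
 * timing loop and call linkage subtracted out by timing the identical
 * loop around an empty function. */
double net_seconds(void (*benchmark)(void), int runs) {
    clock_t t0 = clock();
    for (int i = 0; i < runs; i++) benchmark();
    clock_t t1 = clock();
    for (int i = 0; i < runs; i++) nothing();
    clock_t t2 = clock();
    return ((double)(t1 - t0) - (double)(t2 - t1)) / CLOCKS_PER_SEC;
}
```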

Second, on a timesharing machine, the absolute cpu time (including typical
memory usage [cache misses]) along with an independent analysis of the
impact of load on real time and memory contention will provide more and
better information than a compendium of real time versus load for each
benchmark. A further piece of data would be the cost of context switching
and other interrupts (and spacewar processes on SAIL). Without all of this
the timings are useless.

Third, you have included in your objective advice some personal opinion.
Namely, that personal computers are inherently better than timeshared ones.
This is under the guise of `real time' being the only serious measurement
coupled with the statement that the personal machine is simpler to measure here.
Doug Lenat made the same mistake that everyone seems to make, which is that
the personal machine is more available, and that downtime doesn't affect
them. Since you want to consider the load as impacting the worth of the
timeshared system, let me propose the following `wall clock test' on
a normally populated personal machine environment versus a normally populated
timesharing environment. We consider 100 random times. In each case the test
subject stands outside the door. On a signal he is to enter the room with
the terminal or computer, log in, and run the program. We measure transit time,
login time, and loadup time, and subtract them from the measurements. If someone
is at the personal machine, you wait and the clock ticks. If you
cannot log into the timeshared machine, the clock ticks. Neither may say anything
to anyone. We do this so that all time slots are represented appropriately.

I won't complicate the matter by including the incremental funding situation
when ARPA comes to see the demo of your system, and Ohlander watches
the one-cylinder personal machine stagger through its paces while the huge
timeshared machine, with everyone gratefully off while the money is being
considered, 15-MIPSes its way through the demo.

In this sense, real time on a personal machine means real time when you have
the machine in your hot little hands. So what is different about CPU time
when it is simply the measure of real time when you have the CPU in your
hot little hands? Only the scale is different. And you CAN get the timeshared
machine to yourself at critical times, and at night (which is when the average
grad student gets to use a Dolphin). If you are considering that everyone
get a personal machine, then with economics in mind, we ought to calculate the
CPU percentage of a very powerful timeshared machine as an exercise.

Lenat flames about availability. The man-hours of availability of a mainframe
are much higher than in a normally populated personal machine environment.

			-rpg-

∂22-Oct-81  2009	George J. Carrette <GJC at MIT-MC> 	timing tests and benchmarks. 
Date: 22 October 1981 20:40-EDT
From: George J. Carrette <GJC at MIT-MC>
Subject:  timing tests and benchmarks.
To: RPG at SU-AI

Thanks. I'll give these a try. The Franz LOCF declaration is a total crock,
put into the language only for use in benchmarks, it seems, since you get
ridiculously poor debugging if you use it.

One thing I am curious about, either quantitatively or just your feel for it,
is how many Lisp users (of average color), with how much memory, is
the maximum on a 780 running any reasonable Lisp. I assume that NIL
will be the first reasonable Lisp on a Vax, so perhaps you can answer this.
			-rpg-
Thanks for your help.